Science.gov

Sample records for 3d ladar imagery

  1. Visualization of 3D images from multiple texel images created from fused LADAR/digital imagery

    NASA Astrophysics Data System (ADS)

    Killpack, Cody C.; Budge, Scott E.

    2015-05-01

The ability to create 3D models using registered texel images (fused ladar and digital imagery) is an important topic in remote sensing. These models are automatically generated by matching multiple texel images into a single common reference frame. However, rendering a sequence of independently registered texel images often presents challenges. Although accurately registered, the model textures are often incorrectly overlapped and interwoven when standard rendering techniques are used. Consequently, corrections must be made after all the primitives have been rendered, by determining the best texture for any viewable fragment in the model. Determining the best texture is difficult, as each texel image remains independent after registration. The depth data are not merged to form a single 3D mesh, which eliminates the possibility of generating a fused texture atlas. It is therefore necessary to determine which textures overlap and how best to combine them dynamically during the render process. The best texture for a particular pixel can be defined using 3D geometric criteria, in conjunction with a real-time, view-dependent ranking algorithm. As a result, overlapping texture fragments can be hidden, exposed, or blended according to their computed measure of reliability.
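The view-dependent ranking described above can be sketched as a reliability score per overlapping texture fragment. The particular criteria below (incidence angle and camera distance) and all names are illustrative assumptions, not the paper's actual metric:

```python
import numpy as np

def texture_reliability(frag_normal, cam_pos, frag_center):
    """Score one texture fragment for the current view: fragments imaged
    closer to head-on (view ray parallel to the unit surface normal) and
    from nearer cameras are treated as more reliable."""
    view = cam_pos - frag_center
    dist = np.linalg.norm(view)
    view /= dist
    cos_incidence = np.dot(frag_normal, view)      # 1.0 = head-on
    return max(cos_incidence, 0.0) / (1.0 + dist)  # penalize oblique, distant views

def best_texture(fragments):
    """fragments: list of (texture_id, unit_normal, cam_pos, center) tuples.
    Returns the id of the fragment to expose; lower-ranked overlaps could
    be hidden or blended by weight instead."""
    return max(fragments, key=lambda f: texture_reliability(f[1], f[2], f[3]))[0]
```

In a renderer this score would be evaluated per viewable fragment each frame, so the winning texture changes with the viewpoint.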

  2. 3D ladar ATR based on recognition by parts

    NASA Astrophysics Data System (ADS)

    Sobel, Erik; Douglas, Joel; Ettinger, Gil

    2003-09-01

LADAR imaging is unique in its potential to accurately measure the 3D surface geometry of targets. We exploit this 3D geometry to perform automatic target recognition on targets in the domain of military and civilian ground vehicles. Here we present a robust, model-based 3D LADAR ATR system that efficiently searches the target hypothesis space by reasoning hierarchically from vehicle parts up to identification of a whole vehicle with a specific pose and articulation state. The LADAR data consist of one or more 3D point clouds generated by laser returns from ground vehicles viewed from multiple sensor locations. The key to this approach is an automated 3D registration process that precisely aligns and matches multiple data views to model-based predictions of the observed LADAR data. We accomplish this registration using robust 3D surface alignment techniques that we have also used successfully in 3D medical image analysis applications. The registration routine seeks to minimize a robust 3D surface distance metric to recover the best six-degree-of-freedom pose and fit. We process the observed LADAR data by first extracting salient parts, matching these parts to model-based predictions, and hierarchically constructing and testing increasingly detailed hypotheses about the identity of the observed target. This cycle of prediction, extraction, and matching efficiently partitions the target hypothesis space based on the distinctive anatomy of the target models and achieves effective recognition by progressing logically from a target's constituent parts up to its complete pose and articulation state.
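A robust 3D surface distance metric of the kind minimized here can be illustrated with a Huber-robustified nearest-point distance evaluated under a six-degree-of-freedom pose. This is a generic sketch, not the authors' implementation; all function names and the choice of Huber loss are assumptions:

```python
import numpy as np

def pose_matrix(rx, ry, rz, tx, ty, tz):
    # Compose a 6-DOF rigid transform from Euler angles (radians) and a translation.
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz], float)

def robust_surface_distance(data, model, pose, delta=0.5):
    """Mean Huber loss of nearest-point distances from posed data to model:
    quadratic for small residuals, linear beyond `delta`, so clutter and
    missing surface do not dominate the pose fit."""
    R, t = pose
    moved = data @ R.T + t
    d = np.min(np.linalg.norm(moved[:, None, :] - model[None, :, :], axis=2), axis=1)
    return np.mean(np.where(d <= delta, 0.5 * d**2, delta * (d - 0.5 * delta)))
```

A pose search would minimize this scalar over the six parameters; the robust loss is what keeps outlier returns from pulling the fit off the target surface.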

  3. Fusion of multisensor passive and active 3D imagery

    NASA Astrophysics Data System (ADS)

    Fay, David A.; Verly, Jacques G.; Braun, Michael I.; Frost, Carl E.; Racamato, Joseph P.; Waxman, Allen M.

    2001-08-01

    We have extended our previous capabilities for fusion of multiple passive imaging sensors to now include 3D imagery obtained from a prototype flash ladar. Real-time fusion of low-light visible + uncooled LWIR + 3D LADAR, and SWIR + LWIR + 3D LADAR is demonstrated. Fused visualization is achieved by opponent-color neural networks for passive image fusion, which is then textured upon segmented object surfaces derived from the 3D data. An interactive viewer, coded in Java3D, is used to examine the 3D fused scene in stereo. Interactive designation, learning, recognition and search for targets, based on fused passive + 3D signatures, is achieved using Fuzzy ARTMAP neural networks with a Java-coded GUI. A client-server web-based architecture enables remote users to interact with fused 3D imagery via a wireless palmtop computer.

  4. Threat object identification performance for LADAR imagery: comparison of 2-dimensional versus 3-dimensional imagery

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Matthew A.; Driggers, Ronald G.; Redman, Brian; Krapels, Keith A.

    2006-05-01

This research was conducted to determine the change in human observer range performance when LADAR imagery is presented in stereo 3D versus 2D. It compares the ability of observers to correctly identify twelve common threatening and non-threatening single-handed objects (e.g., a pistol versus a cell phone). Images were collected with the Army Research Lab/Office of Naval Research (ARL/ONR) Short Wave Infrared (SWIR) Imaging LADAR. A perception experiment, utilizing both military and civilian observers, presented subjects with images of varying angular resolutions. The results of this experiment were used to create identification performance curves for the 2D and 3D imagery, which show probability of identification as a function of range. Analysis of the results indicates no evidence of a statistically significant difference in performance between 2D and 3D imagery.

  5. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images, which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions at ranges of up to hundreds of meters. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands, with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects, which is not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g., vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception that can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed in the context of autonomous vehicle navigation and target recognition.

  6. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  7. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for spatial and geometric alignment of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range, and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range, and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
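A minimal point-to-point ICP iteration, of the kind the paper builds its variation on, can be sketched as alternating nearest-neighbour matching with a closed-form (SVD/Kabsch) rigid update. This is the textbook baseline, not the authors' specific variant, and the brute-force matching is for illustration only:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Point-to-point ICP: repeatedly match each source point to its nearest
    destination point, then solve the best rigid update in closed form.
    Returns R, t such that dst ~= src @ R.T + t after convergence."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest neighbours (a k-d tree would replace this).
        nn = np.argmin(np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2), axis=1)
        matched = dst[nn]
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # exclude reflections
        dR = Vt.T @ D @ U.T
        # Compose the incremental update with the accumulated transform.
        R, t = dR @ R, dR @ (t - mu_s) + mu_d
    return R, t
```

Each iteration decreases (or holds) the sum of matched squared distances, which is why ICP converges to a local minimum; good initialization, as provided here by vehicle odometry between frames, keeps that minimum the right one.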

  8. Characterization measurements of ASC FLASH 3D ladar

    NASA Astrophysics Data System (ADS)

    Larsson, Håkan; Gustafsson, Frank; Johnson, Bruce; Richmond, Richard; Armstrong, Ernest

    2009-09-01

As a part of the project agreement between the Swedish Defence Research Agency (FOI) and the United States of America's Air Force Research Laboratory (AFRL), a joint field trial was performed in Sweden during two weeks in January 2009. The main purpose of this trial was to characterize AFRL's latest version of the ASC (Advanced Scientific Concepts [1]) FLASH 3D LADAR sensor. The measurements were performed primarily in FOI's optical hall, whose 100 m indoor range offers measurements under controlled conditions, minimizing effects such as atmospheric turbulence. Data were also acquired outdoors in both forest and urban scenarios, using vehicles and humans as targets, with the purpose of acquiring data from more dynamic platforms to assist in further algorithm development. This paper shows examples of the acquired data and presents initial results.

  9. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms, and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m³, and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  10. ROIC for gated 3D imaging LADAR receiver

    NASA Astrophysics Data System (ADS)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

Time-of-flight laser range finding, deep-space communications, and scanning video imaging are three applications requiring very low noise optical receivers to achieve detection of fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth, and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 μm pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit, and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit, and timing control module. The preamplifier uses the capacitor-feedback transimpedance amplifier (CTIA) structure, which has two capacitors to offer switchable capacitance for passive/active dual-mode imaging. The main element of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to signal processing in a ROIC because of their working characteristics. The output driver uses a simple unity-gain buffer. Because the signal is amplified in the column-level circuit, the amplifier in the unity-gain buffer is a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns; for integration currents from 200 nA to 4 μA, the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integration currents from 1 nA to 20 nA, the nonlinearity is also less than 1%.

  11. MBE based HgCdTe APDs and 3D LADAR sensors

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Asbrock, Jim; Bailey, Steven; Baley, Diane; Chapman, George; Crawford, Gina; Drafahl, Betsy; Herrin, Eileen; Kvaas, Robert; McKeag, William; Randall, Valerie; De Lyon, Terry; Hunter, Andy; Jensen, John; Roberts, Tom; Trotta, Patrick; Cook, T. Dean

    2007-04-01

Raytheon is developing HgCdTe APD arrays and sensor chip assemblies (SCAs) for scanning and staring LADAR systems. The nonlinear characteristics of APDs operating in moderate-gain mode place severe requirements on layer thickness and doping uniformity as well as defect density. MBE-based HgCdTe APD arrays, engineered for high performance, meet the stringent requirements of low defects, excellent uniformity, and reproducibility. In situ controls for alloy composition and substrate temperature have been implemented at HRL, LLC and Raytheon Vision Systems and enable consistent run-to-run results. The novel epitaxial design, using a separate absorption-multiplication (SAM) architecture, enables the realization of the unique advantages of HgCdTe, including tunable wavelength, low noise, high fill factor, low crosstalk, and ambient operation. Focal planes have been built by integrating MBE detector arrays processed in a 2 × 128 format with a 2 × 128 scanning ROIC. The ROIC reports both range and intensity and can detect multiple laser returns, with each pixel autonomously reporting its return. FPAs show exceptionally good bias uniformity, <1% at an average gain of 10. A recent breakthrough in device design has resulted in APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidth. 3D LADAR sensors utilizing these FPAs have been integrated and demonstrated both at Raytheon Missile Systems and at the Naval Air Warfare Center Weapons Division at China Lake. Excellent spatial and range resolution has been achieved, with 3D imagery demonstrated at both short and long range. Ongoing development, under an Air Force-sponsored MANTECH program, of high-performance HgCdTe MBE APDs grown on large silicon wafers promises significant FPA cost reduction, both by increasing the number of arrays on a given wafer and by enabling automated processing.

  12. Ultra-Compact, High-Resolution LADAR System for 3D Imaging

    NASA Technical Reports Server (NTRS)

    Xu, Jing; Gutierrez, Roman

    2009-01-01

An eye-safe LADAR system weighs under 500 grams and has a range resolution of 1 mm at 10 m. The laser uses an adjustable, tiny microelectromechanical system (MEMS) mirror, made at SiWave, to sweep the laser frequency. The laser device is small (70×50×13 mm). The LADAR uses mature fiber-optic telecommunication technologies throughout the system, making this innovation an efficient performer. The tiny size and light weight make the system useful for commercial and industrial applications including surface damage inspection, range measurement, and 3D imaging.

  13. Use of laser radar imagery in optical pattern recognition: the Optical Processor Enhanced Ladar (OPEL) Program

    NASA Astrophysics Data System (ADS)

    Goldstein, Dennis H.; Mills, Stuart A.; Dydyk, Robert B.

    1998-03-01

The Optical Processor Enhanced Ladar (OPEL) program is designed to evaluate the capabilities of a seeker obtained by integrating two state-of-the-art technologies: laser radar, or ladar, and optical correlation. The program is a thirty-two-month effort to build, optimize, and test a breadboard seeker system (the OPEL System) that incorporates these two promising technologies. Laser radars produce both range and intensity image information. Use of this information in an optical correlator is described. A correlator with binary phase input and ternary amplitude and phase filter capability is assumed. Laser radar imagery was collected on five targets over 360 degrees of azimuth from three elevation angles. This imagery was then processed to provide training sets in preparation for filter construction. This paper reviews the ladar and optical correlator technologies used, outlines the OPEL program, and describes the OPEL system.

  14. Multi-static networked 3D ladar for surveillance and access control

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ogirala, S. S. R.; Hu, B.; Le, Han Q.

    2007-04-01

A theoretical design and simulation of a 3D ladar system concept for surveillance, intrusion detection, and access control is described. It is a non-conventional system architecture that consists of: i) a multi-static configuration with an arbitrarily scalable number of transmitters (Tx's) and receivers (Rx's) that form an optical wireless code-division-multiple-access (CDMA) network, and ii) a flexible system architecture with modular plug-and-play components that can be deployed for any facility with arbitrary topology. Affordability is a driving consideration, and a key feature for low cost is an asymmetric use of many inexpensive Rx's in conjunction with fewer Tx's, which are generally more expensive. The Rx's are spatially distributed close to the surveyed area for large coverage, and are capable of receiving signals from multiple Tx's with moderate laser power. The system produces sensing information that scales as N×M, where N and M are the numbers of Tx's and Rx's, as opposed to the linear scaling ~N of a non-networked system. Also, for target positioning, besides laser pointing direction and time-of-flight, the algorithm includes multiple point-of-view image fusion and triangulation for enhanced accuracy, which is not possible with non-networked monostatic ladars. Simulation and scaled model experiments on some aspects of this concept are discussed.
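The N×M scaling and the time-of-flight-plus-triangulation positioning can be illustrated for a bistatic geometry: each Tx–Rx pair's time of flight measures a range sum |Tx − p| + |p − Rx|, and a least-squares (Gauss–Newton) fit over all N·M pairs recovers the target position. This is a sketch under idealized, noise-free assumptions; all names are hypothetical:

```python
import numpy as np

def range_sums(txs, rxs, target):
    """Bistatic TOF measurements: each Tx-Rx pair observes the range sum
    |Tx - p| + |p - Rx|, so N transmitters and M receivers yield N*M
    independent constraints on the target position p."""
    return np.array([np.linalg.norm(t - target) + np.linalg.norm(target - r)
                     for t in txs for r in rxs])

def locate(txs, rxs, meas, guess, iters=50):
    """Gauss-Newton refinement of the target position from all range sums."""
    p = np.array(guess, float)
    pairs = [(t, r) for t in txs for r in rxs]
    for _ in range(iters):
        J, res = [], []
        for (t, r), m in zip(pairs, meas):
            dt, dr = p - t, p - r
            nt, nr = np.linalg.norm(dt), np.linalg.norm(dr)
            J.append(dt / nt + dr / nr)     # gradient of the range sum
            res.append(nt + nr - m)
        step, *_ = np.linalg.lstsq(np.array(J), -np.array(res), rcond=None)
        p += step
    return p
```

With N = 1 and M = 1 the single range sum only constrains p to an ellipsoid; it is the surplus of pairs, the N×M scaling above, that makes the triangulation well posed.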

  15. Processing 3D flash LADAR point-clouds in real-time for flight applications

    NASA Astrophysics Data System (ADS)

    Craig, R.; Gravseth, I.; Earhart, R. P.; Bladt, J.; Barnhill, S.; Ruppert, L.; Centamore, C.

    2007-04-01

Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three-dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA-based "pixel-tube" processors and coprocessors and their associated algorithms, which have led to a number of advancements in high-speed wavefront processing, along with additional advances in dynamic camera control and space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing, and life-cycle costs can be significantly reduced. The technique requires a state-of-the-art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high-resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced because all points are captured at the same time and are thus correlated; this correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work as well as aspects of our recent testing at Marshall's Flight Robotics Laboratory.

  16. 3D imaging LADAR with linear array devices: laser, detector and ROIC

    NASA Astrophysics Data System (ADS)

    Kameyama, Shumpei; Imaki, Masaharu; Tamagawa, Yasuhisa; Akino, Yosuke; Hirai, Akihito; Ishimura, Eitaro; Hirano, Yoshihito

    2009-07-01

This paper introduces the recent development of 3D imaging LADAR (LAser Detection And Ranging) at Mitsubishi Electric Corporation. The system consists of in-house-made linear-array key devices: the laser, the detector, and the ROIC (Read-Out Integrated Circuit). The laser transmitter is a high-power, compact planar-waveguide array laser at a wavelength of 1.5 microns. The detector array consists of low-excess-noise avalanche photodiodes (APDs) using an InAlAs multiplication layer. The analog ROIC array, fabricated in a SiGe-BiCMOS process, includes the transimpedance amplifiers (TIAs), the peak intensity detectors, the time-of-flight (TOF) detectors, and the multiplexers for read-out. A key feature of this device is its ability to detect small signals, achieved by optimizing the peak intensity detection circuit. By combining these devices with a one-dimensional fast scanner, a real-time 3D range image can be obtained. After explaining the key devices, some 3D imaging results are demonstrated using the single-element key devices. Imaging using the developed array devices is planned for the near future.

  17. Maritime target identification in flash-ladar imagery

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter; Hammer, Marcus

    2012-05-01

    The paper presents new techniques and processing results for automatic segmentation, shape classification, generic pose estimation, and model-based identification of naval vessels in laser radar imagery. The special characteristics of focal plane array laser radar systems such as multiple reflections and intensity-dependent range measurements are incorporated into the algorithms. The proposed 3D model matching technique is probabilistic, based on the range error distribution, correspondence errors, the detection probability of potentially visible model points and false alarm errors. The match algorithm is robust against incomplete and inaccurate models, each model having been generated semi-automatically from a single range image. A classification accuracy of about 96% was attained, using a maritime database with over 8000 flash laser radar images of 146 ships at various ranges and orientations together with a model library of 46 vessels. Applications include military maritime reconnaissance, coastal surveillance, harbor security and anti-piracy operations.

  18. Verification of a 3-D terrain mapping LADAR on various materials in different environments

    NASA Astrophysics Data System (ADS)

    Edwards, Lulu; Brown, E. Ray; Jersey, Sarah R.

    2010-01-01

A field validation of a laser detection and ranging (LADAR) system was conducted by the U.S. Army Engineer Research and Development Center (ERDC), Vicksburg, Mississippi. The LADAR system, a commercial-off-the-shelf (COTS) LADAR system custom-modified by Autonomous Solutions, Inc. (ASI), was tested for accuracy in measuring terrain geometry. A verification method was developed to compare the LADAR dataset to a ground-truth dataset that consisted of total station measurements. Three control points were measured and used to align the two datasets. The influence of slopes, surface materials, light, fog, and dust was investigated. The study revealed that slopes only affected measurements when the terrain was obscured from the LADAR system, and ambient light conditions did not significantly affect the LADAR measurements. The accuracy of the LADAR system, which was equipped with fog correction, was adversely affected by particles suspended in air, such as fog or dust. Also, in some cases the material type had an effect on the accuracy of the LADAR measurements.
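Aligning a LADAR dataset to total-station ground truth from three or more measured control points is a classic absolute-orientation problem; a standard closed-form solution (Kabsch, rotation and translation only, no scale) can be sketched as follows. This is the generic method, not necessarily the exact verification procedure ERDC used:

```python
import numpy as np

def align_from_control_points(ladar_pts, survey_pts):
    """Best-fit rigid transform taking the LADAR frame onto the survey
    (total-station) frame from >= 3 matched, non-collinear control points.
    Returns R, t such that a LADAR point p maps to R @ p + t."""
    mu_a, mu_b = ladar_pts.mean(0), survey_pts.mean(0)
    H = (ladar_pts - mu_a).T @ (survey_pts - mu_b)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, mu_b - R @ mu_a
```

With exactly three non-collinear control points and noise-free measurements the fit is exact; with more points, or measurement noise, the same formula gives the least-squares alignment, and the residuals at the control points bound the alignment error of the whole cloud.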

  19. Human and tree classification based on a model using 3D ladar in a GPS-denied environment

    NASA Astrophysics Data System (ADS)

    Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2013-05-01

This study describes a method to classify humans and trees by extracting their geometric and statistical features from data obtained with a 3D LADAR. In a wooded, GPS-denied environment, it is difficult to identify the location of unmanned ground vehicles, and it is also difficult to properly recognize the environment in which these vehicles move. In this study, using the point cloud data obtained via 3D LADAR, a method to extract the features of humans, trees, and other objects within an environment was implemented and verified through the processes of segmentation, feature extraction, and classification. First, for the segmentation, the radially bounded nearest neighbor method was applied. Second, for the feature extraction, each segmented object was divided into three parts, and their geometric and statistical features were extracted. A human was divided into three parts: the head, trunk, and legs. A tree was also divided into three parts: the top, middle, and bottom. The geometric features were derived from the variance of the x-y data about the center of each part and from the distances between the central points of the parts, obtained using K-means clustering. The statistical features were the variances of the individual parts. In this study, three, six, and six features were extracted, respectively, resulting in a total of 15 features. Finally, after training an artificial neural network on the extracted data, new data were classified. This study presents the results of an experiment applying the proposed algorithm with a vehicle equipped with 3D LADAR in a thickly forested area, a GPS-denied environment. A total of 5,158 segments were obtained, and the classification rates for humans and trees were 82.9% and 87.4%, respectively.
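The radially bounded nearest neighbor segmentation step can be sketched as a flood fill over points closer than a chosen radius: any two points within the radius share a segment label. A brute-force illustration (the paper's actual implementation and radius are not given):

```python
import numpy as np
from collections import deque

def rbnn_segment(points, radius):
    """Radially bounded nearest-neighbour clustering: breadth-first flood
    fill over the graph whose edges connect points closer than `radius`.
    Returns an integer segment label per point."""
    n = len(points)
    labels = np.full(n, -1)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue                      # already absorbed into a segment
        queue = deque([i])
        labels[i] = current
        while queue:
            j = queue.popleft()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((d < radius) & (labels == -1))[0]:
                labels[k] = current
                queue.append(k)
        current += 1
    return labels
```

Each resulting segment (a human, a tree, or clutter) would then be split into its three vertical parts for the feature-extraction stage described above.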

  20. Flattop beam illumination for 3D imaging ladar with simple optical devices in the wide distance range

    NASA Astrophysics Data System (ADS)

    Tsuji, Hidenobu; Nakano, Takayuki; Matsumoto, Yoshihiro; Kameyama, Shumpei

    2016-04-01

We have developed an illumination optical system for 3D imaging ladar (laser detection and ranging) that forms a flattop beam by transforming the Gaussian beam over a wide distance range. The illumination is achieved by beam division and recombination using a prism and a negatively powered lens. The optimum condition for the transformation by the optical system is derived. It is confirmed that the flattop distribution can be formed over a wide range of propagation distances, from 1 to 1000 m. The experimental result with the prototype is in good agreement with the calculation result.

  1. Long-range imaging ladar flight test

    NASA Astrophysics Data System (ADS)

    Brandt, James; Steiner, Todd D.; Mandeville, William J.; Dinndorf, Kenneth M.; Krasutsky, Nick J.; Minor, John L.

    1995-06-01

Wright Laboratory and Loral Vought Systems (LVS) have been involved for the last nine years in the research and development of high-power diode-pumped solid-state lasers for medium- to long-range laser radar (LADAR) seekers for tactical air-to-ground munitions. LVS provided the lead in three key LADAR programs at Wright Lab: the Submunition Guidance program (Subguide), the Low Cost Anti-Armor Submunition program (LOCAAS), and the Diode Laser and Detector Array Development program (3-D). This paper discusses recent advances through the 3-D program that provide the opportunity to obtain three-dimensional laser radar imagery in captive flight at a range of 5 km.

  2. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
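The volumetric silhouette-carving step can be sketched as keeping only the voxels whose projections land inside the silhouette in every view (the viewpoint-consistency test). A minimal illustration with caller-supplied projection functions; all names and the orthographic projections in the usage below are assumptions, not the system's actual camera model:

```python
import numpy as np

def carve(voxel_centers, cameras, silhouettes):
    """Silhouette carving: a voxel survives only if its projection falls
    inside the (boolean) silhouette mask in every view.
    `cameras` are callables mapping a 3D point to integer pixel indices."""
    keep = np.ones(len(voxel_centers), bool)
    for project, mask in zip(cameras, silhouettes):
        for i, v in enumerate(voxel_centers):
            u, w = project(v)
            inside = (0 <= u < mask.shape[0]) and (0 <= w < mask.shape[1]) and mask[u, w]
            keep[i] &= inside            # one inconsistent view removes the voxel
    return voxel_centers[keep]
```

Because every view can only remove voxels, the carved solid is the largest shape consistent with all silhouettes (the visual hull); the sparse shape-from-motion solution supplies the camera poses that make the projections accurate.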

  3. Improvements in the Visualization of Stereoscopic 3D Imagery

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.

    2015-09-01

    A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has embedded mechanisms to perceive depth for extended periods of time without producing eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain displaying scenarios. An important source of eye fatigue originates in the conflict between vergence eye movements and focusing mechanisms. Today's eye-tracking technology makes it possible to know the viewer's gaze direction; hence, 3D imagery can be dynamically corrected based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays by emulating the vergence and accommodation mechanisms of binocular human vision. Unlike other methods that improve visual comfort at the cost of introducing depth distortions into the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.
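    The vergence geometry underlying such corrections is simple: for a viewer at distance D from the screen with eye separation e, a point fused at depth z requires an on-screen parallax of e(z - D)/z. A minimal sketch, with illustrative viewing parameters (not values from the paper):

```python
def screen_parallax(z, eye_sep=0.065, screen_dist=0.6):
    """On-screen horizontal parallax (metres) that places a fused point at
    viewer distance z: zero at the screen plane, approaching the eye
    separation for distant points. The 65 mm eye separation and 0.6 m
    viewing distance are illustrative assumptions."""
    return eye_sep * (z - screen_dist) / z
```

    Points at the screen plane need no parallax, which is why content near the screen depth is the most comfortable to fuse.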

  4. ATA algorithm suite for co-boresighted pmmw and ladar imagery

    NASA Astrophysics Data System (ADS)

    Stevens, Mark R.; Snorrason, Magnus; Ablavsky, Vitaly; Amphay, Sengvieng A.

    2001-08-01

    Air-to-ground missiles with day/night, adverse-weather, pinpoint-accuracy Autonomous Target Acquisition (ATA) seekers are essential for today's modern warfare scenarios. Passive millimeter wave (PMMW) sensors have the ability to see through clouds; in fact, they tend to show metallic objects in high contrast regardless of weather conditions. However, their resolution is very low compared with other ATA sensors such as laser radar (LADAR). We present an ATA algorithm suite that combines the superior target detection potential of PMMW with the high-quality segmentation and recognition abilities of LADAR. Preliminary detection and segmentation results are presented for a set of image pairs of military vehicles that were collected for this project using an 89 GHz, 18-inch aperture PMMW sensor from TRW and a 1.06 μm high-resolution LADAR.

  5. Anti-ship missile tracking with a chirped amplitude modulation ladar

    NASA Astrophysics Data System (ADS)

    Redman, Brian C.; Stann, Barry L.; Ruff, William C.; Giza, Mark M.; Aliberti, Keith; Lawler, William B.

    2004-09-01

    Shipboard infrared search and track (IRST) systems can detect sea-skimming anti-ship missiles at long ranges. Since IRST systems cannot measure range and velocity, they have difficulty distinguishing missiles from slowly moving false targets and clutter. ARL is developing a ladar based on its patented chirped amplitude modulation (AM) technique to provide unambiguous range and velocity measurements of targets handed over to it by the IRST. Using the ladar's range and velocity data, false alarms and clutter objects will be distinguished from valid targets. If the target is valid, its angular location, range, and velocity will be used to update the target track until remediation has been effected. By using an array receiver, ARL's ladar can also provide 3D imagery of potential threats in support of force protection. The ladar development program will be accomplished in two phases. In Phase I, currently in progress, ARL is designing and building a breadboard ladar test system for proof-of-principle static platform field tests. In Phase II, ARL will build a brassboard ladar test system that will meet operational goals in shipboard testing against realistic targets. The principles of operation of the chirped AM ladar for range and velocity measurements, the ladar performance model, and the top-level design for the Phase I breadboard are presented in this paper.
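    The chirped AM principle can be sketched with the textbook FMCW relations: the beat frequency on an up-sweep is lowered by Doppler and on a down-sweep raised by it, so their sum isolates range and their difference isolates velocity. A minimal illustration with assumed waveform parameters (not ARL's actual design values):

```python
C = 3.0e8  # speed of light (m/s)

def range_velocity(f_up, f_dn, slope, f_center):
    """Recover range and line-of-sight velocity from the beat frequencies
    measured on an up-chirp and a down-chirp (textbook FMCW relations
    applied to the AM modulation; a sketch, not ARL's processing)."""
    f_range = 0.5 * (f_up + f_dn)   # range-induced beat component
    f_dopp = 0.5 * (f_dn - f_up)    # Doppler component (approaching > 0)
    return C * f_range / (2.0 * slope), C * f_dopp / (2.0 * f_center)

# Forward-model a closing target at 3 km, 300 m/s, then invert.
slope, f_center = 1.0e12, 1.0e9     # chirp rate (Hz/s), center frequency (Hz)
f_r = 2 * 3000.0 * slope / C        # range beat
f_d = 2 * 300.0 * f_center / C      # Doppler shift
R, v = range_velocity(f_r - f_d, f_r + f_d, slope, f_center)
```

    The same pair of measurements thus resolves the range/velocity ambiguity that a single sweep leaves open.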

  6. Advances in HgCdTe APDs and LADAR Receivers

    NASA Technical Reports Server (NTRS)

    Bailey, Steven; McKeag, William; Wang, Jinxue; Jack, Michael; Amzajerdian, Farzin

    2010-01-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise Readout Integrated Circuits. Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs that operate at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths, and that have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In this presentation we will review progress in high-resolution scanning, staring, and ultra-high-sensitivity photon-counting LADAR sensors.

  7. Imaging through obscurants with a heterodyne detection-based ladar system

    NASA Astrophysics Data System (ADS)

    Reibel, Randy R.; Roos, Peter A.; Kaylor, Brant M.; Berg, Trenton J.; Curry, James R.

    2014-06-01

    Bridger Photonics has been researching and developing a ladar system based on heterodyne detection for imaging through brownout and other degraded visual environments (DVEs). An FMCW ladar system provides several advantages over direct-detect pulsed time-of-flight systems, including: 1) higher average powers; 2) single-photon sensitivity while remaining tolerant of strong return signals; 3) Doppler sensitivity for clutter removal; and 4) a more flexible system for sensing during various stages of flight. In this paper, we provide a review of our sensor, discuss lessons learned during various DVE tests, and show our latest 3D imagery.

  8. Anti-ship missile tracking with a chirped AM ladar - Update: design, model predictions, and experimental results

    NASA Astrophysics Data System (ADS)

    Redman, Brian; Ruff, William; Stann, Barry; Giza, Mark; Lawler, William; Dammann, John; Potter, William

    2005-05-01

    Shipboard infrared search and track (IRST) systems can detect sea-skimming, anti-ship missiles at long ranges. Since IRST systems cannot measure range and line-of-sight (LOS) velocity, they have difficulty distinguishing missiles from false targets and clutter. In a joint Army-Navy program, the Army Research Laboratory (ARL) is developing a ladar based on the chirped amplitude modulation (AM) technique to provide range and velocity measurements of potential targets handed over by the distributed aperture system IRST (DAS-IRST) being developed by the Naval Research Laboratory (NRL) and sponsored by the Office of Naval Research (ONR). Using the ladar's range and velocity data, false alarms and clutter will be eliminated, and valid missile targets' tracks will be updated. By using an array receiver, ARL's ladar will also provide 3D imagery of potential threats for force protection and situational awareness. The concept of operation, the Phase I breadboard ladar design and performance model results, and the Phase I breadboard ladar development program were presented in paper 5413-16 at last year's symposium. This paper will present updated design and performance model results, as well as recent laboratory and field test results for the Phase I breadboard ladar. Implications of the Phase I program results for the design, development, and testing of the Phase II brassboard ladar will also be discussed.

  9. Characterization of scannerless ladar

    NASA Astrophysics Data System (ADS)

    Monson, Todd C.; Grantham, Jeffrey W.; Childress, Steve W.; Sackos, John T.; Nellums, Robert O.; Lebien, Steve M.

    1999-05-01

    Scannerless laser radar (LADAR) is the next revolutionary step in laser radar technology. It has the potential to dramatically increase the image frame rate over raster-scanned systems while eliminating mechanical moving parts. The system presented here uses a negative lens to diverge the light from a pulsed laser to floodlight-illuminate a target. Return light is collected by a commercial camera lens, an image intensifier tube applies a modulated gain, and a relay lens focuses the resulting image onto a commercial CCD camera. To produce range data, a minimum of three snapshots is required while modulating the gain of the image intensifier tube's microchannel plate (MCP) at a MHz rate. Since November 1997 the scannerless LADAR designed by Sandia National Laboratories has undergone extensive testing. It has been taken on numerous field tests and has imaged calibrated panels up to a distance of 1 km on an outdoor range. Images have been taken at ranges over a kilometer and can be taken at much longer ranges with modified range gate settings. Sample imagery and potential applications are presented here. The accuracy of the range imagery produced by this scannerless LADAR has been evaluated, and the range resolution was found to be approximately 15 cm. Its sensitivity was also quantified and found to be substantially better than that of raster-scanned direct-detection LADAR systems. Additionally, the effect of the number of snapshots, and of the phase spacing between them, on the quality of the range data has been evaluated. Overall, the impressive results produced by scannerless LADAR make it ideal for autonomous munitions guidance and various other applications.
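    The three-snapshot minimum comes from classic three-bucket phase recovery: with the intensifier gain phase-stepped by 0, 120 and 240 degrees, each pixel's modulation phase, and hence its range, follows in closed form. A sketch of the principle (synthetic pixel values, not Sandia's processing chain):

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def range_from_snapshots(i0, i1, i2, f_mod):
    """Per-pixel range from three snapshots whose receiver gain was
    phase-stepped by 0, 120 and 240 degrees. With I_k = A + B*cos(phi +
    2*pi*k/3), the phase is recovered in closed form (three-bucket method)."""
    phase = np.arctan2(np.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)
    phase = np.mod(phase, 2.0 * np.pi)           # wrap into [0, 2*pi)
    return C * phase / (4.0 * np.pi * f_mod)     # ambiguity interval c/(2*f_mod)

# Synthetic pixel: 10 MHz modulation, true range 4.2 m.
f_mod, r_true = 1.0e7, 4.2
phi = 4.0 * np.pi * f_mod * r_true / C
shots = [5.0 + 2.0 * np.cos(phi + k * 2.0 * np.pi / 3.0) for k in range(3)]
r_est = range_from_snapshots(*shots, f_mod)
```

    Additional snapshots and different phase spacings, as studied in the paper, over-determine the same phase and reduce noise sensitivity.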

  10. High Accuracy 3D Processing of Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Gruen, A.; Zhang, L.; Kocaman, S.

    2007-01-01

    Automatic DSM/DTM generation reproduces not only the general features but also the detailed features of the terrain relief, with a height accuracy of around 1 pixel in cooperative terrain: RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed stereo measurement of points on the roofs in order to generate a 3D city model with CCM. The results show that building models with the main roof structures can be successfully extracted from HRSI. As expected, more details are visible with QuickBird.

  11. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, the estimated heights may include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined both across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent in planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). At the local level, land-use blocks with low FAR values often have small errors, owing to the small height errors of the low buildings within them, while blocks with high FAR values often have large errors, owing to the large height errors of the high buildings within them. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of the buildings in a scene; building heights must therefore be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
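    The link between height errors and FAR errors is direct, since FAR divides total floor area (footprint times estimated floor count) by block land area. A hypothetical helper showing how a 20% height underestimate propagates into the FAR (illustrative numbers, not the paper's data):

```python
def floor_area_ratio(buildings, block_area, floor_height=3.0):
    """FAR = total floor area / block land area, with floor counts derived
    from building heights. Illustrative: footprints would come from the
    reference planar data, heights from the stereo-derived surface model.

    buildings : list of (footprint_area_m2, height_m) tuples
    """
    floor_area = sum(fp * max(1, round(h / floor_height))
                     for fp, h in buildings)
    return floor_area / block_area

# Height underestimation propagates directly into FAR underestimation.
truth = floor_area_ratio([(500.0, 30.0), (400.0, 15.0)], 10000.0)
biased = floor_area_ratio([(500.0, 24.0), (400.0, 12.0)], 10000.0)  # heights -20%
```

    High blocks lose whole floors per building under the same relative height error, which is why the paper finds the largest FAR errors in high-FAR blocks.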

  12. High Quality 3D data capture from UAV imagery

    NASA Astrophysics Data System (ADS)

    Haala, Norbert; Cramer, Michael; Rothermel, Mathias

    2014-05-01

    The flexible use of unmanned airborne systems is especially beneficial when aiming at data capture for geodetic-photogrammetric applications within areas of limited extent. This can include tasks like topographical mapping in the context of land management and consolidation, or natural hazard mapping for the documentation of landslide areas. Our presentation discusses the suitability of UAV systems for such tasks based on a pilot project for the Landesamt für Geoinformation und Landentwicklung Baden-Württemberg (LGL BW). This study evaluated the efficiency and accuracy of photogrammetric image collection by UAV systems against the demands of national mapping authorities. For this purpose the use of different UAV platforms and cameras for the generation of photogrammetric standard products like ortho images and digital surface models was evaluated. However, the main focus of the presentation is an investigation of the quality potential of UAV-based 3D data capture at high resolution and accuracy. This is evaluated by the documentation of a small (700 m × 350 m) landslide area from a UAV flight. For this purpose the UAV images were used to generate 3D point clouds at a resolution of 5-8 cm, which corresponds to the ground sampling distance (GSD) of the original images. This was realized by dense, pixel-wise matching algorithms available both in off-the-shelf and in research software tools. Suitable results can especially be derived if large redundancy is available from highly overlapping image blocks, and UAV images can easily be collected at high overlap due to the platforms' low cruising speed. Thus, our investigations clearly demonstrated the feasibility of relatively simple UAV platforms and cameras for 3D point determination close to the sub-pixel level.

  13. Automatic building detection and 3D shape recovery from single monocular electro-optic imagery

    NASA Astrophysics Data System (ADS)

    Lavigne, Daniel A.; Saeedi, Parvaneh; Dlugan, Andrew; Goldstein, Norman; Zwick, Harold

    2007-04-01

    The extraction of 3D building geometric information from high-resolution electro-optical imagery is becoming a key element in numerous geospatial applications. Indeed, producing 3D urban models is a requirement for a variety of applications such as spatial analysis of urban design, military simulation, and site monitoring of a particular geographic location. However, almost all operational approaches developed over the years for 3D building reconstruction are semiautomated ones, where a skilled human operator is involved in the 3D geometry modeling of building instances, which results in a time-consuming process. Furthermore, such approaches usually require stereo image pairs, image sequences, or laser scanning of a specific geographic location to extract the 3D models from the imagery. Finally, with current techniques, the 3D geometric modeling phase may be characterized by the extraction of 3D building models with a low accuracy level. This paper describes the Automatic Building Detection (ABD) system and embedded algorithms currently under development. The ABD system provides a framework for the automatic detection of buildings and the recovery of 3D geometric models from single monocular electro-optic imagery. The system is designed in order to cope with multi-sensor imaging of arbitrary viewpoint variations, clutter, and occlusion. Preliminary results on monocular airborne and spaceborne images are provided. Accuracy assessment of detected buildings and extracted 3D building models from single airborne and spaceborne monocular imagery of real scenes are also addressed. Embedded algorithms are evaluated for their robustness to deal with relatively dense and complicated urban environments.

  14. The Maintenance Of 3-D Scene Databases Using The Analytical Imagery Matching System (Aims)

    NASA Astrophysics Data System (ADS)

    Hovey, Stanford T.

    1987-06-01

    The increased demand for multi-resolution displays of simulated scene data for aircraft training or mission planning has led to a need for digital databases of 3-dimensional topography and geographically positioned objects. This data needs to be at varying resolutions or levels of detail as well as be positionally accurate to satisfy close-up and long distance scene views. The generation and maintenance processes for this type of digital database requires that relative and absolute spatial positions of geographic and cultural features be carefully controlled in order for the scenes to be representative and useful for simulation applications. Autometric, Incorporated has designed a modular Analytical Image Matching System (AIMS) which allows digital 3-D terrain feature data to be derived from cartographic and imagery sources by a combination of automatic and man-machine techniques. This system provides a means for superimposing the scenes of feature information in 3-D over imagery for updating. It also allows for real-time operator interaction between a monoscopic digital imagery display, a digital map display, a stereoscopic digital imagery display and automatically detected feature changes for transferring 3-D data from one coordinate system's frame of reference to another for updating the scene simulation database. It is an advanced, state-of-the-art means for implementing a modular, 3-D scene database maintenance capability, where original digital or converted-to-digital analog source imagery is used as a basic input to perform accurate updating.

  15. Super-resolution for flash LADAR data

    NASA Astrophysics Data System (ADS)

    Hu, Shuowen; Young, S. Susan; Hong, Tsai; Reynolds, Joseph P.; Krapels, Keith; Miller, Brian; Thomas, Jim; Nguyen, Oanh

    2009-05-01

    Flash laser detection and ranging (LADAR) systems are increasingly used in robotics applications for autonomous navigation and obstacle avoidance. Their compact size, high frame rate, wide field of view, and low cost are key advantages over traditional scanning LADAR devices. However, these benefits are achieved at the cost of spatial resolution. Super-resolution enhancement can be applied to improve the resolution of flash LADAR devices, making them ideal for small robotics applications. Previous work by Rosenbush et al. applied the super-resolution algorithm of Vandewalle et al. to flash LADAR data and observed quantitative improvement in image quality in terms of the number of edges detected. This study uses the super-resolution algorithm of Young et al. to enhance the resolution of range data acquired with a SwissRanger SR-3000 flash LADAR camera. To improve the accuracy of sub-pixel shift estimation, a wavelet preprocessing stage was developed and applied to the flash LADAR imagery. The authors used the triangle orientation discrimination (TOD) methodology for a subjective evaluation of the performance improvement (measured in terms of probability of target discrimination and subject response times) achieved with super-resolution. Super-resolution of flash LADAR imagery resulted in superior probabilities of target discrimination at all investigated ranges while reducing subject response times.
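    The shift estimation that the wavelet stage supports builds on frame-to-frame registration; a common whole-pixel baseline is phase correlation, where the peak of the whitened cross-power spectrum gives the translation. A sketch of that step only (the wavelet preprocessing and the sub-pixel refinement of Young et al. are omitted):

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the whole-pixel translation between two frames from the
    peak of the phase-correlation surface (the registration step of a
    multi-frame super-resolution pipeline, without sub-pixel refinement)."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    cross /= np.abs(cross) + 1e-12            # whiten: keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                # map wrapped peaks to
        dy -= ref.shape[0]                    # negative shifts
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
frame = rng.random((32, 32))
shift = phase_correlation_shift(frame, np.roll(frame, (3, -5), axis=(0, 1)))
```

    In a full pipeline, the correlation peak would be interpolated to sub-pixel precision before the low-resolution frames are fused onto the high-resolution grid.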

  16. On Fundamental Evaluation Using Uav Imagery and 3d Modeling Software

    NASA Astrophysics Data System (ADS)

    Nakano, K.; Suzuki, H.; Tamino, T.; Chikatsu, H.

    2016-06-01

    Unmanned aerial vehicles (UAVs), which have been widely used in recent years, can acquire high-resolution images with resolutions in millimeters; such images cannot be acquired with manned aircraft. Moreover, it has become possible to obtain a realistic 3D surface reconstruction using highly overlapping images and 3D modeling software such as ContextCapture, Pix4Dmapper, and PhotoScan, based on computer vision techniques such as structure from motion and multi-view stereo. 3D modeling software has many applications; however, most of them do not seem to apply appropriate accuracy control in accordance with the knowledge of photogrammetry and/or computer vision. Therefore, we performed flight tests in a test field using a UAV equipped with a gimbal stabilizer and a consumer-grade digital camera. Our UAV is a hexacopter that can fly along waypoints for autonomous flight and can record flight logs. We acquired images from different altitudes, such as 10 m, 20 m, and 30 m. We obtained 3D reconstruction results as orthoimages, point clouds, and textured TIN models for accuracy evaluation in several cases with different image scale conditions using 3D modeling software. Moreover, the accuracy was evaluated for different units of input imagery: the course unit and the flight unit. This paper describes a fundamental accuracy evaluation for 3D modeling using UAV imagery and 3D modeling software from the viewpoint of close-range photogrammetry.
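    The altitudes flown map directly to image resolution through the usual nadir GSD relation, pixel pitch scaled by flying height over focal length. A small sketch with assumed camera values (not the actual test camera):

```python
def ground_sample_distance(altitude_m, focal_mm=16.0, pixel_um=4.0):
    """Nadir ground sample distance in metres per pixel: pixel pitch times
    altitude over focal length. The 16 mm focal length and 4 micron pixel
    pitch are illustrative assumptions, not the paper's sensor."""
    return altitude_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)

# GSD scales linearly with the 10 / 20 / 30 m flying heights.
gsds = [ground_sample_distance(h) for h in (10.0, 20.0, 30.0)]
```

    Doubling or tripling the altitude coarsens the GSD by the same factor, which is what makes the per-altitude accuracy comparison meaningful.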

  17. Advances in ladar components and subsystems at Raytheon

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Chapman, George; Edwards, John; Mc Keag, William; Veeder, Tricia; Wehner, Justin; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Hall, Donald N. B.; Jacobson, Shane M.; Amzajerdian, Farzin; Cook, T. Dean

    2012-06-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise Readout Integrated Circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs that operate at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths, and that have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in three areas: (1) a scanning 256 × 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program; (2) a staring 256 × 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission; and (3) photon-counting SCAs, which have demonstrated a dramatic reduction in dark count rate due to improved design, operation, and processing.

  18. Advances in LADAR Components and Subsystems at Raytheon

    NASA Technical Reports Server (NTRS)

    Jack, Michael; Chapman, George; Edwards, John; McKeag, William; Veeder, Tricia; Wehner, Justin; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Hall, Donald N. B.; Jacobson, Shane M.; Amzajerdian, Farzin; Cook, T. Dean

    2012-01-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise Readout Integrated Circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs that operate at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths, and that have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in three areas: (1) a scanning 256 x 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program; (2) a staring 256 x 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission; and (3) photon-counting SCAs, which have demonstrated a dramatic reduction in dark count rate due to improved design, operation, and processing.

  19. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling and for decision making that sets a mark in urban development. MOMRA is responsible for large-scale mapping at the 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales, for 10 cm, 20 cm and 40 cm GSD, with aerial triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses aerial triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven surface and undulations. Real-time 3D visualization and interactive exploration support planning processes by providing multiple stakeholders, such as decision makers, architects, urban planners, authorities, citizens or investors, with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities for coping with exotic conditions through better and more advanced viewing infrastructure. Riyadh on one side is 5700 m above sea level while Abha city is at 2300 m; this uneven terrain represents a drastic change of surface across the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities. 
In this research paper, the influence of aerial imagery at different GSDs (Ground Sample Distances), together with aerial triangulation, on 3D visualization is examined in different regions of the Kingdom, to check which scale is more suitable for obtaining better results and is cost manageable, with GSDs of 7.5 cm, 10 cm, 20 cm and 40 cm.

  20. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. Several existing tools, such as VisualSFM and the open-source project OpenSfM, assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data. In military applications, such as the use of unmanned aerial vehicles (UAVs) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.
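    A controlled degradation of the sort described can be as simple as blurring and adding noise, quantified against the original with PSNR. An illustrative stand-in (the paper's exact degradation protocol is not specified here):

```python
import numpy as np

def degrade(img, noise_sigma=0.05, blur_passes=2, seed=0):
    """Controlled degradation: separable 3-tap box blur plus additive
    Gaussian noise. An illustrative stand-in for a degradation protocol,
    not the paper's actual one."""
    rng = np.random.default_rng(seed)
    out = img.astype(float)
    k = np.ones(3) / 3.0
    for _ in range(blur_passes):
        out = np.apply_along_axis(np.convolve, 0, out, k, mode="same")
        out = np.apply_along_axis(np.convolve, 1, out, k, mode="same")
    return out + rng.normal(0.0, noise_sigma, out.shape)

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB of a degraded frame vs. its original."""
    return 10.0 * np.log10(peak ** 2 / np.mean((ref - test) ** 2))

rng = np.random.default_rng(1)
clean = rng.random((64, 64))
quality = psnr(clean, degrade(clean))
```

    Sweeping the blur and noise parameters gives a graded series of inputs against which the SfM packages can be compared.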

  1. Extracting Semantically Annotated 3d Building Models with Textures from Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.; Poznanska, A.

    2015-03-01

    This paper proposes a method for the reconstruction of city buildings with automatically derived textures that can be directly used for façade element classification. Oblique and nadir aerial imagery recorded by a multi-head camera system is transformed into dense 3D point clouds and evaluated statistically in order to extract the hull of the structures. For the resulting wall, roof and ground surfaces high-resolution polygonal texture patches are calculated and compactly arranged in a texture atlas without resampling. The façade textures subsequently get analyzed by a commercial software package to detect possible windows whose contours are projected into the original oriented source images and sparsely ray-casted to obtain their 3D world coordinates. With the windows being reintegrated into the previously extracted hull the final building models are stored as semantically annotated CityGML "LOD-2.5" objects.
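    The sparse ray-casting of window contours reduces to ray-plane intersection: each detected contour pixel defines a viewing ray that is intersected with the fitted wall plane. A geometric sketch (camera pose and plane assumed known; the identifiers are illustrative):

```python
import numpy as np

def cast_to_facade(cam_center, ray_dir, plane_point, plane_normal):
    """Intersect a viewing ray with a planar facade to recover the 3D
    world position of a detected window contour point (the geometric core
    of the sparse ray-casting step; camera orientation assumed given)."""
    c = np.asarray(cam_center, float)
    d = np.asarray(ray_dir, float)
    n = np.asarray(plane_normal, float)
    denom = n @ d
    if abs(denom) < 1e-12:
        return None                      # ray parallel to the facade
    t = n @ (np.asarray(plane_point, float) - c) / denom
    return c + t * d

# Camera at the origin, a wall in the plane y = 10.
hit = cast_to_facade([0, 0, 0], [1, 1, 0], [0, 10, 0], [0, 1, 0])
```

    Only a sparse subset of contour pixels needs casting, since the window outline is polygonal once reintegrated into the extracted hull.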

  2. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the inexpensive, efficient image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial uses such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To recover these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well on test data. The robustness of the method is examined by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result that contains a larger offset than that of the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. 
Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a
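    The translation, rotation and scale differences described above are exactly what a similarity transform captures, and with known correspondences it has a closed-form least-squares solution. A sketch using the SVD/Umeyama method, one standard way to realize such an alignment (not necessarily the thesis's exact routine):

```python
import numpy as np

def similarity_align(src, dst):
    """Closed-form least-squares similarity transform with
    dst ~ s * R @ src + t (SVD/Umeyama method). Assumes the two (N, 3)
    point sets are already in known correspondence."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(xd.T @ xs / len(src))       # cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # fix reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / xs.var(0).sum()       # scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform from synthetic correspondences.
rng = np.random.default_rng(0)
src = rng.random((50, 3))
c, si = np.cos(0.5), np.sin(0.5)
R_true = np.array([[c, -si, 0.0], [si, c, 0.0], [0.0, 0.0, 1.0]])
dst = 2.5 * src @ R_true.T + np.array([1.0, -2.0, 3.0])
s, R, t = similarity_align(src, dst)
```

    In practice the correspondences come from the matched 3D keypoints, and an iterative refinement stage then polishes this initial alignment.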

  3. Flexible simulation strategy for modeling 3D cultural objects based on multisource remotely sensed imagery

    NASA Astrophysics Data System (ADS)

    Guienko, Guennadi; Levin, Eugene

    2003-01-01

    New ideas and solutions never come alone. Although automated feature extraction is not sufficiently mature to move from the realm of scientific investigation into the category of production technology, a new goal has arisen: 3D simulation of real-world objects extracted from images. This task, which evolved from feature extraction and is not an easy task itself, becomes even more complex, multi-leveled, and often uncertain and fuzzy when one exploits time-sequenced, multi-source remotely sensed visual data. The basic components of the process are familiar image processing tasks: fusion of various types of imagery, automatic recognition of objects, removing those objects from the source images, and replacing them in the images with realistic renderings of their simulated "twin" objects. This paper discusses how to aggregate the most appropriate approach to each task into one technological process in order to develop a Manipulator for Visual Simulation of 3D objects (ManVIS) that is independent of imagery, format, and media. The technology could be made general by combining a number of competent special-purpose algorithms under appropriate contextual, geometric, spatial, and temporal constraints derived from a-priori knowledge. This could be achieved by planning the simulation in an Open Structure Simulation Strategy Manager (O3SM), a distinct component of ManVIS that builds the simulation strategy before actual image manipulation begins.

  4. Real-time scene and signature generation for ladar and imaging sensors

    NASA Astrophysics Data System (ADS)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes the development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing the accuracy of its thermal signatures. First, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger-mode LADAR, in addition to the already existing functionality for linear-mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module, capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for dynamic simulation of missiles with multi-mode seekers.
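
    A 3D heat diffusion solver of this kind computes how temperature spreads through a solid body over time. A minimal sketch of one explicit finite-difference step of the heat equation on a voxel grid, assuming a uniform isotropic material (an illustration only; VIRSuite's actual solver is not described in the abstract):

```python
import numpy as np

def diffuse_step(T, alpha, dt, dx):
    """One explicit finite-difference step of the 3D heat equation
    dT/dt = alpha * laplacian(T) on a regular voxel grid (borders handled
    crudely by edge padding). Stable when alpha * dt / dx**2 <= 1/6.
    """
    Tp = np.pad(T, 1, mode="edge")
    lap = (Tp[2:, 1:-1, 1:-1] + Tp[:-2, 1:-1, 1:-1] +
           Tp[1:-1, 2:, 1:-1] + Tp[1:-1, :-2, 1:-1] +
           Tp[1:-1, 1:-1, 2:] + Tp[1:-1, 1:-1, :-2] - 6.0 * T) / dx**2
    return T + alpha * dt * lap
```

    Repeating this step, with boundary terms for solar loading and radiative exchange added, is the usual basis of dynamic thermal signature prediction.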

  5. Accuracy evaluation of segmentation for high resolution imagery and 3D laser point cloud data

    NASA Astrophysics Data System (ADS)

    Ni, Nina; Chen, Ninghua; Chen, Jianyu

    2014-09-01

    High resolution satellite imagery and 3D laser point cloud data provide precise geometry, rich spectral information and clear feature texture. The segmentation of high resolution remote sensing images and 3D laser point clouds is the basis of object-oriented remote sensing image analysis, since the segmentation results directly influence the accuracy of subsequent analysis and discrimination. There is still no common segmentation theory supporting these algorithms, so when facing a specific problem, the applicability of a segmentation method should be determined through segmentation accuracy assessment, and an optimal segmentation chosen accordingly. To date, the most common approaches for evaluating the effectiveness of a segmentation method are subjective evaluation and supervised evaluation. To provide a more objective evaluation, we carried out the following work. We analyse and compare previously proposed image segmentation accuracy evaluation methods: area-based metrics, location-based metrics and combined metrics. 3D point cloud data gathered by a Riegl VZ-1000 was transformed into two dimensions. The object-oriented segmentation results for aquaculture-farm, building and farmland polygons were used as test objects to evaluate segmentation accuracy.
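
    Area-based metrics of the kind compared above score a segmentation by the overlap between segmented and reference polygons. A minimal per-object sketch (hypothetical code, not the authors' implementation) using boolean raster masks:

```python
import numpy as np

def area_based_accuracy(seg_mask, ref_mask):
    """Area-based segmentation accuracy for one object: intersection-over-
    union plus the fractions of spurious segment area and missed reference
    area (common building blocks of area-based metrics)."""
    seg, ref = seg_mask.astype(bool), ref_mask.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    iou = inter / union if union else 1.0
    over = 1.0 - inter / seg.sum() if seg.sum() else 0.0    # spurious segment area
    under = 1.0 - inter / ref.sum() if ref.sum() else 0.0   # missed reference area
    return iou, over, under
```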

  6. Quality Analysis on 3d Building Models Reconstructed from Uav Imagery

    NASA Astrophysics Data System (ADS)

    Jarzabek-Rychard, M.; Karpina, M.

    2016-06-01

    Recent developments in UAV technology and structure-from-motion techniques have made UAVs standard platforms for 3D data collection. Because of their flexibility and their ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with a UAV has important potential to reduce labour cost for fast updates of already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is conducted threefold: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that building models acquired with UAV photogrammetry have an accuracy of better than 18 cm in planimetric position and about 15 cm in the height component.

  7. Experiments with Uas Imagery for Automatic Modeling of Power Line 3d Geometry

    NASA Astrophysics Data System (ADS)

    Jóźków, G.; Vander Jagt, B.; Toth, C.

    2015-08-01

    The ideal mapping technology for transmission line inspection is airborne LiDAR flown from helicopter platforms, which allows full 3D geometry extraction in a highly automated manner. Large scale aerial images can also be used for this purpose; however, automation is then possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still costly. UAS technology has the potential to reduce these costs, especially if inexpensive platforms with consumer grade cameras are used. This study investigates the potential of high resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of the experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points were created on the wires as well. This allowed the 3D geometry of the transmission lines to be modeled similarly to LiDAR-acquired point clouds. Results showed that transmission line modeling is possible with high internal accuracy in both the horizontal and vertical directions, even when the wires were represented by a partial (sparse) point cloud.
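
    For small sag, a wire span is well approximated by a parabola (close to the true catenary), so the sag can be estimated by least squares from the matched wire points. A hedged sketch of that idea (the paper's actual fitting procedure is not specified in the abstract):

```python
import numpy as np

def fit_wire_sag(x, z):
    """Fit z = a*x**2 + b*x + c to points matched along one wire span and
    return the coefficients plus the sag: the drop of the lowest point
    below the chord joining the span endpoints (small-sag approximation)."""
    a, b, c = np.polyfit(x, z, 2)
    x0, x1 = x.min(), x.max()
    z0, z1 = np.polyval([a, b, c], [x0, x1])
    xv = -b / (2 * a)                       # vertex (lowest point for a > 0)
    chord_z = z0 + (z1 - z0) * (xv - x0) / (x1 - x0)
    sag = chord_z - np.polyval([a, b, c], xv)
    return (a, b, c), sag
```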

  8. Spectral ladar as a UGV navigation sensor

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2011-06-01

    We demonstrate new results using our Spectral LADAR prototype, which highlight the benefits of this sensor for Unmanned Ground Vehicle (UGV) navigation applications. This sensor is an augmentation of conventional LADAR and uses a polychromatic source to obtain range-resolved 3D spectral point clouds. These point cloud images can be used to identify objects based on combined spatial and spectral features in three dimensions and at long standoff range. The Spectral LADAR transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Backscatter from distant targets is dispersed into 25 spectral bands, where each spectral band is independently range resolved with multiple return pulse recognition. Our new results show that Spectral LADAR can spectrally differentiate hazardous terrain (mud) from favorable driving surfaces (dry ground). This is a critical capability, since in UGV contexts mud is potentially hazardous, requires modified vehicle dynamics, and is difficult to identify based on 3D spatial signatures. Additionally, we demonstrate the benefits of range resolved spectral imaging, where highly cluttered 3D images of scenes (e.g. containing camouflage, foliage) are spectrally unmixed by range separation and segmented accordingly. Spectral LADAR can achieve this unambiguously and without the need for stereo correspondence, sub-pixel detection algorithms, or multi-sensor registration and data fusion.

  9. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    NASA Astrophysics Data System (ADS)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor imagery has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to LiDAR point clouds. LiDAR DSM and intensity information are utilised in measuring the similarity of LiDAR and optical imagery via Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity and computational cost of optimisation in higher dimensions. The reliability of registration is thereby improved through the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results are discussed.
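
    The core similarity measure here, mutual information, can be estimated from a joint intensity histogram of the two images. A minimal two-variable sketch (the paper extends this to a trivariate Combined MI over DSM, intensity and optical data, which is not reproduced here):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in bits) between two equally sized images,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of image b
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

    Registration then amounts to searching over transform parameters for the pose that maximizes this measure.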

  10. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful for knowing the environment structure, performing efficient navigation or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential with a high door detection rate and its applicability for robust and efficient envelope reconstruction. PMID:25654723

  11. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    PubMed Central

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful for knowing the environment structure, performing efficient navigation or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and examines door candidate detection in depth. The presented approach is tested on real data sets, showing its potential with a high door detection rate and its applicability for robust and efficient envelope reconstruction. PMID:25654723

  12. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642
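
    ERD is conventionally quantified as the percentage band-power change of the active (motor imagery) interval relative to a resting baseline, with negative values indicating desynchronization. An illustrative sketch of that computation (not the study's actual processing pipeline):

```python
import numpy as np

def erd_percent(signal, fs, baseline, active, band=(10.0, 12.0)):
    """Event-related desynchronization in the upper alpha band: percentage
    band-power change of the active interval relative to the baseline.
    `baseline` and `active` are (start, stop) sample indices."""
    def band_power(x):
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
        sel = (freqs >= band[0]) & (freqs <= band[1])
        return psd[sel].sum()
    ref = band_power(signal[slice(*baseline)])
    act = band_power(signal[slice(*active)])
    return 100.0 * (act - ref) / ref        # negative => ERD
```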

  13. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642

  14. Multiaspect high-resolution ladar data collection

    NASA Astrophysics Data System (ADS)

    Trussel, C. Ward; Barr, Dallas N.; Schilling, Bradley W.; Templeton, Glen C.; Mizerka, Lawrence J.; Warner, Chris; Hummel, Robert; Hauge, Robert O.

    2003-08-01

    The Jigsaw program, sponsored by the Defense Advanced Research Projects Agency (DARPA), will demonstrate a multi-observation concept to identify obscured combat vehicles that cannot be discerned from a single aspect angle. Three-dimensional (3-D) laser radar (ladar) images of a nearly hidden target are collected from several observation points. Image pieces of the target taken from all the data sets are then assembled to obtain a more complete image that allows identification by a human observer. In this effort a test-bed ladar, constructed by the Night Vision and Electronic Sensors Directorate (NVESD), is used to provide 3-D images in which the voxels have dimensions of the order of centimeters on each side. Ultimately a UAV-borne Jigsaw sensor will fly by a suspect location while collecting the multiple images. This paper describes a simulated flight in which 800 images were taken of two targets obscured by foliage. The vehicle-mounted laser radar used for the collection was moved in 0.076 meter steps along a 61 meter path. Survey data were collected for the sensor and target locations, as well as for several unobscured fiducial markers near the targets, to aid in image reconstruction. As part of a separate DARPA contractual effort, target returns were extracted from individual images and assembled to form a final 3-D view of the vehicles for human identification; those results are reported separately. The laser radar employs a diode-pumped, passively Q-switched, Nd:YAG micro-chip laser. The transmitted 1.06 micron radiation was produced in six-microjoule pulses at a rate of 3 kHz, with a duration of 1.2 nanoseconds at the output of the detector electronics. An InGaAs avalanche photodiode/amplifier with a bandwidth of 0.5 GHz was used as the receiver, and the signal was digitized at a rate of 2 GS/s. Details of the laser radar and sample imagery will be discussed and presented.
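
    The quoted pulse duration and digitizer rate bound the achievable range resolution. A quick back-of-envelope check using the standard two-way relations (the relations and constant are standard physics, not taken from the paper):

```python
# Two-way ladar range figures from pulse width and digitizer rate.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(pulse_s):
    """Range resolution of a pulsed ladar: delta_R = c * tau / 2."""
    return C * pulse_s / 2.0

def range_sample_spacing(sample_rate_hz):
    """Range spacing between successive digitizer samples: c / (2 * fs)."""
    return C / (2.0 * sample_rate_hz)
```

    With the abstract's values, the 1.2 ns pulse gives roughly 0.18 m of range resolution, and 2 GS/s sampling corresponds to about 7.5 cm between range samples, consistent with centimeter-scale voxels.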

  15. Resolution limits in imaging LADAR systems

    NASA Astrophysics Data System (ADS)

    Khoury, Jed; Woods, Charles L.; Lorenzo, Joseph P.; Kierstead, John; Pyburn, Dana; Sengupta, S. K.

    2004-04-01

    In this paper, we introduce a new design concept for laser radar systems that combines both phase comparison and time-of-flight methods. We show from signal-to-noise ratio considerations that there is a fundamental limit to the overall resolution in 3-D imaging range laser radar (LADAR). We introduce a new metric, volume of resolution (VOR), and show from quantum noise considerations that there is a maximum resolution volume that can be achieved for a given set of system parameters. Consequently, there is a direct tradeoff between range resolution and spatial resolution: in a LADAR system, range resolution may be maximized at the expense of spatial image resolution and vice versa. We introduce resolution efficiency, ηr, as a new figure of merit for LADAR, which describes system resolution under the constraints of a specific design compared to the optimal resolution performance derived from quantum noise considerations. We analyze how the resolution efficiency can be utilized to improve the resolution performance of a LADAR system. Our analysis can be extended to all LADAR systems, regardless of whether they are flash-imaging or scanning laser systems.

  16. MEMS-scanned ladar sensor for small ground robots

    NASA Astrophysics Data System (ADS)

    Stann, Barry L.; Dammann, John F.; Giza, Mark M.; Jian, Pey-Schuan; Lawler, William B.; Nguyen, Hung M.; Sadler, Laurel C.

    2010-04-01

    The Army Research Laboratory (ARL) is researching a short-range ladar imager for small unmanned ground vehicles for navigation, obstacle/collision avoidance, and target detection and identification. To date, commercial ladars for this application have been flawed by one or more factors including low pixelization, insufficient range or range resolution, image artifacts, no daylight operation, large size, high power consumption, and high cost. In the prior year we conceived a scanned ladar design based on a newly developed commercial MEMS mirror and a pulsed erbium fiber laser, initiated construction, and performed in-lab tests that validated the basic ladar architecture. This year we improved the transmitter and receiver modules and successfully tested a new low-cost and compact erbium laser candidate. We further developed the existing software to allow adjustment of operating parameters on the fly and display of the imaged data in real time. For our most significant achievement, we mounted the ladar on an iRobot PackBot and wrote software to integrate PackBot and ladar control signals and ladar imagery on the PackBot's computer network. We recently drove the PackBot remotely over an in-lab obstacle course while displaying the ladar data in real time over a wireless link. The ladar has a 5-6 Hz frame rate, an image size of 256 (h) × 128 (v) pixels, a 60° × 30° field of regard, 20 m range, eyesafe operation, and 40 cm range resolution (with provisions for super-resolution or improved accuracy). This paper describes the ladar design and updates progress in its development and performance.

  17. New developments in HgCdTe APDs and LADAR receivers

    NASA Astrophysics Data System (ADS)

    McKeag, William; Veeder, Tricia; Wang, Jinxue; Jack, Michael; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Cook, T. Dean; Amzajerdian, Farzin

    2011-06-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high performance detectors with gain, i.e., APDs, with very low noise readout integrated circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns reported with both time and intensity, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths, and linear-mode photon counting has been demonstrated. SCAs utilizing these high performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in two areas: (1) a scanning 256 × 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program and (2) a staring 256 × 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission.

  18. A novel window based method for approximating the Hausdorff in 3D range imagery.

    SciTech Connect

    Koch, Mark William

    2004-10-01

    Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
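
    The Hausdorff fraction counts the share of model points that have a scene point within a threshold distance. One window-based approximation in the spirit of the abstract (a hypothetical reconstruction, not the report's actual algorithm) tests each model point against a voxel occupancy grid, giving time linear in the number of points:

```python
import numpy as np

def hausdorff_fraction(model, scene, dist, voxel):
    """Approximate directed Hausdorff fraction: the share of model points
    with a scene point within `dist`. Each model point passes if any
    occupied voxel lies in its (2w+1)^3 window, w = ceil(dist / voxel),
    so the cost is linear in the number of points."""
    occupied = {tuple(v) for v in np.floor(scene / voxel).astype(int)}
    w = int(np.ceil(dist / voxel))
    hits = 0
    for p in np.floor(model / voxel).astype(int):
        if any((p[0] + i, p[1] + j, p[2] + k) in occupied
               for i in range(-w, w + 1)
               for j in range(-w, w + 1)
               for k in range(-w, w + 1)):
            hits += 1
    return hits / len(model)
```

    The window quantization over- or under-estimates true distances by up to a voxel diagonal, which is the price of the linear-time approximation.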

  19. Meteoroid and debris special investigation group; status of 3-D crater analysis from binocular imagery

    NASA Technical Reports Server (NTRS)

    Sapp, Clyde A.; See, Thomas H.; Zolensky, Michael E.

    1992-01-01

    During the 3-month deintegration of the LDEF, the M&D SIG generated approximately 5000 digital color stereo image pairs of impact-related features from all space-exposed surfaces. Currently, these images are being processed at JSC to yield more accurate feature information. Work is underway to determine the minimum number of data points necessary to parametrically define impact crater morphologies, in order to minimize the man-hour-intensive task of tie point selection. Initial attempts at deriving accurate crater depth and diameter measurements from binocular imagery were based on the assumption that the crater geometries were best defined by paraboloids. We made no assumptions regarding the crater depth/diameter ratios but instead allowed each crater to define its own coefficients by performing a least-squares fit based on user-selected tiepoints. Initial test cases resulted in larger errors than desired, so it was decided to test our basic assumption that the crater geometries could be parametrically defined as paraboloids. The method for testing this assumption was to carefully slice test craters (experimentally produced in an appropriate aluminum alloy) vertically through the center, yielding a readily visible cross-section of the crater geometry. Initially, five separate craters were cross-sectioned in this fashion. A digital image of each cross-section was created, and the 2-D crater geometry was hand-digitized to create a table of XY positions for each crater. A 2nd-order (parabolic) polynomial was fitted to the data using a least-squares approach. The differences between the fitted equation and the actual data were fairly significant, and easily large enough to account for the errors found in the 3-D fits. The differences between the curve fit and the actual data were consistent between the craters. This consistency suggested that the differences were due to the fact that a parabola does not sufficiently define the generic crater geometry.
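
    The test described above, fitting a parabola to a digitized crater cross-section and inspecting the residual, can be sketched as follows (hypothetical code, not the original JSC processing):

```python
import numpy as np

def parabola_fit_error(x, z):
    """Least-squares parabolic fit z = a*x**2 + b*x + c to a digitized
    crater cross-section, returning the coefficients and the RMS residual;
    a large residual indicates the paraboloid assumption is inadequate."""
    coeffs = np.polyfit(x, z, 2)
    rms = float(np.sqrt(np.mean((np.polyval(coeffs, x) - z) ** 2)))
    return coeffs, rms
```

    A profile that really is parabolic fits with near-zero residual, while flat-bottomed or conical profiles leave a systematic residual of the kind the authors observed.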

  20. 3D Building Modeling and Reconstruction using Photometric Satellite and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Izadi, Mohammad

    In this thesis, the problem of three-dimensional (3D) reconstruction of building models using photometric satellite and aerial images is investigated. Two systems are presented: 1) 3D building reconstruction using a nadir single-view image, and 2) 3D building reconstruction using slant multiple-view aerial images. The first system detects building rooftops in orthogonal aerial/satellite images using a hierarchical segmentation algorithm and a shadow verification approach. The heights of detected buildings are then estimated using a fuzzy rule-based method, which measures the height of a building by comparing its predicted shadow region with the actual shadow evidence in the image. This system finally generates a KML (Keyhole Markup Language) file as output, containing 3D models of the detected buildings. The second system uses the geolocation information of a scene containing a building of interest and loads from an input image dataset all slant-view images that contain this scene. These images are then searched automatically to choose image pairs with different views of the scene (north, east, south and west), based on the geolocation and auxiliary data accompanying the input data (metadata describing the acquisition parameters at capture time). The camera parameters corresponding to these images are refined using a novel point matching algorithm. Next, the system independently reconstructs the 3D flat surfaces visible in each view using an iterative algorithm. The 3D surfaces generated for all views are combined, and redundant surfaces are removed to create a complete set of 3D surfaces. Finally, the combined 3D surfaces are connected together to generate a more complete 3D model. In the experiments, both systems are evaluated quantitatively and qualitatively, and different aspects of the two systems, including accuracy, stability, and execution time, are discussed.
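
    Height-from-shadow rests on simple solar geometry: on flat ground, a building of height h casts a shadow of length h / tan(sun elevation). A minimal sketch of the inverse relation (an illustration of the geometry only, not the thesis's fuzzy rule-based method):

```python
import math

def building_height_from_shadow(shadow_len_m, sun_elev_deg):
    """Estimate building height on flat ground from the measured shadow
    length in a nadir image: h = L * tan(sun elevation)."""
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))
```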

  1. 3D exploitation of large urban photo archives

    NASA Astrophysics Data System (ADS)

    Cho, Peter; Snavely, Noah; Anderson, Ross

    2010-04-01

    Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.
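
    Transferring 3D and GIS knowledge into photos reduces to projecting world points through each recovered camera into its georegistered image plane. A minimal pinhole-projection sketch (the paper's full georegistration pipeline is more involved):

```python
import numpy as np

def project(K, R, t, X):
    """Project world points X (N, 3) into an image with intrinsics K and
    pose [R | t] (world -> camera). Returns (N, 2) pixel coordinates."""
    Xc = X @ R.T + t                 # world -> camera frame
    uvw = Xc @ K.T                   # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide
```

    A GIS label attached to a 3D point then lands at the returned pixel in every photo whose camera sees that point, which is the mechanism behind the feature-annotation example.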

  2. Uncertainty preserving patch-based online modeling for 3D model acquisition and integration from passive motion imagery

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Chang, Peng; Molina, Edgardo; Zhu, Zhigang

    2012-06-01

    In both military and civilian applications, abundant data from diverse sources captured on airborne platforms are often available for a region of interest. Since the data often include motion imagery streams collected from multiple platforms flying at different altitudes, with sensors of different fields of view (FOVs), resolutions, frame rates and spectral bands, it is imperative that a cohesive site model encompassing all the information can be quickly built and presented to analysts. In this paper, we propose an Uncertainty Preserving Patch-based Online Modeling System (UPPOMS) leading towards the automatic creation and updating of a cohesive, geo-registered, uncertainty-preserving, efficient 3D site terrain model from passive imagery with varying fields of view and phenomenologies. The proposed UPPOMS has the following technical thrusts that differentiate our approach from others: (1) An uncertainty-preserving, patch-based 3D model is generated, which enables the integration of images captured with a mixture of NFOV and WFOV and/or visible and infrared motion imagery sensors. (2) Patch-based stereo matching and multi-view 3D integration are utilized, which are suitable for scenes with many low-texture regions, particularly in mid-wave infrared images. (3) In contrast to conventional volumetric algorithms, whose computational and storage costs grow exponentially with the amount of input data and the scale of the scene, the proposed UPPOMS system employs an online algorithmic pipeline and scales well to large amounts of input data. Experimental results and a discussion of future work are provided.

  3. Extracting and analyzing micro-Doppler from ladar signatures

    NASA Astrophysics Data System (ADS)

    Tahmoush, Dave

    2015-05-01

    Ladar and other 3D imaging modalities have the capability of creating 3D micro-Doppler to analyze the micro-motions of human subjects. Beyond recognizing the micro-motion itself, they can also recognize the moving part, such as the hand or arm. Combined with measured RCS values of the body, ladar imaging can be used to ground-truth the more sensitive radar micro-Doppler measurements and associate the moving part of the subject with the Doppler and RCS measured by the radar system. The 3D ladar signatures can also be used to classify activities and actions on their own, achieving 86% accuracy using a micro-Doppler-based classification strategy.

  4. Lift-Off: Using Reference Imagery and Freehand Sketching to Create 3D Models in VR.

    PubMed

    Jackson, Bret; Keefe, Daniel F

    2016-04-01

    Three-dimensional modeling has long been regarded as an ideal application for virtual reality (VR), but current VR-based 3D modeling tools suffer from two problems that limit creativity and applicability: (1) the lack of control for freehand modeling, and (2) the difficulty of starting from scratch. To address these challenges, we present Lift-Off, an immersive 3D interface for creating complex models with a controlled, handcrafted style. Artists start outside of VR with 2D sketches, which are then imported and positioned in VR. Then, using a VR interface built on top of image processing algorithms, 2D curves within the sketches are selected interactively and "lifted" into space to create a 3D scaffolding for the model. Finally, artists sweep surfaces along these curves to create 3D models. Evaluations are presented for both long-term users and for novices who each created a 3D sailboat model from the same starting sketch. Qualitative results are positive, with the visual style of the resulting models of animals and other organic subjects as well as architectural models matching what is possible with traditional fine art media. In addition, quantitative data from logging features built into the software are used to characterize typical tool use and suggest areas for further refinement of the interface. PMID:26780801

  5. Dubai 3D Textured Mesh Using High Quality Resolution Vertical/Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Tayeb Madani, Adib; Ziad Ahmad, Abdullateef; Christoph, Lueken; Hammadi, Zamzam; Manal Abdullah Sabeal

    2016-06-01

    Providing high-quality 3D data at reasonable cost has always been essential, as it forms the core data and foundation for information-based decision-making tools for urban environments. Such tools give decision makers, stakeholders, professionals, and public users 3D views and 3D analysis of spatial information that enable real-world visualization, improving users' orientation and increasing their efficiency in tasks related to city planning, inspection, infrastructure, roads, and cadastre management. In this paper, the capability of multi-view Vexcel UltraCam Osprey camera images is examined for producing a 3D model of building façades using an efficient image-based modeling workflow adopted by commercial software. The main steps of this work are specification, point cloud generation, and 3D modeling. After the initial values of the interior and exterior orientation parameters are improved in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the images to generate a point cloud. A mesh is then computed from the points and refined to obtain an accurate model of the buildings. Finally, a texture is assigned to the mesh to create a realistic 3D model. Based on visual assessment, the resulting model provides sufficient LoD2 detail of the buildings. The objective of this paper is neither to compare nor to promote one specific technique, sensor-based system, or mechanism over others presented in existing work; the idea is to share experience.

  6. Comprehensive high-speed simulation software for ladar systems

    NASA Astrophysics Data System (ADS)

    Kim, Seongjoon; Hwang, Seran; Son, Minsoo; Lee, Impyeong

    2011-11-01

    Simulation of LADAR systems is particularly important for the verification of the system design through the performance assessment. Although many researchers attempted to develop various kinds of LADAR simulators, most of them have some limitations in being practically used for the general design of diverse types of LADAR system. We thus attempt to develop high-speed simulation software that is applicable to different types of LADAR system. In summary, we analyzed the previous studies related to LADAR simulation and, based on those existing works, performed the sensor modeling in various aspects. For the high-speed operation, we incorporate time-efficient incremental coherent ray-tracing algorithms, 3D spatial database systems for efficient spatial query, and CUDA based parallel computing. The simulator is mainly composed of three modules: geometry, radiometry, and visualization modules. Regarding the experimental results, our simulation software could successfully generate the simulated data based on the pre-defined system parameters. The validation of simulation results is performed by the comparison with the real LADAR data, and the intermediate results are promising. We believe that the developed simulator can be widely useful for various fields.
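A minimal sketch of what the geometry module of such a simulator must do at its core: intersect a beam ray with a target surface and convert the round-trip time of flight to a range sample. The actual software uses incremental ray tracing over a 3D spatial database; this single-ray, planar-target version is only illustrative.

```python
import math

def simulate_return(origin, direction, plane_point, plane_normal, c=2.998e8):
    """Simulate one ladar range return: intersect a ray with a planar
    target and convert the round-trip time of flight to range."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-12:
        return None  # beam parallel to the surface, no return
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    if t <= 0:
        return None  # target behind the sensor
    rng = t * math.sqrt(sum(d * d for d in direction))
    tof = 2.0 * rng / c  # round-trip time of flight
    return {"range_m": rng, "tof_s": tof}

# Sensor at the origin looking down +x at a wall 150 m away.
ret = simulate_return((0, 0, 0), (1, 0, 0), (150, 0, 0), (-1, 0, 0))
```

A full simulator repeats this per beam footprint and adds the radiometry module (link budget, detector noise) on top of the returned time of flight.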

  7. Assimilation of high resolution satellite imagery into the 3D-CMCC forest ecosystem model

    NASA Astrophysics Data System (ADS)

    Natali, S.; Collalti, A.; Candini, A.; Della Vecchia, A.; Valentini, R.

    2012-04-01

    The use of satellite observations for the accurate monitoring of the terrestrial biosphere has been carried out since the very early stage of remote sensing applications. The possibility to observe the ground surface with different wavelengths and different observation modes (namely active and passive observations) has given to the scientific community an invaluable tool for the observation of wide areas with a resolution down to the single tree. On the other hand, the continuous development of forest ecosystem models has permitted to perform simulations of complex ("natural") forest scenarios to evaluate forest status, forest growth and future dynamics. Both remote sensing and modelling forest assessment methods have advantages and disadvantages that could be overcome by the adoption of an integrated approach. In the framework of the European Space Agency Project KLAUS, high resolution optical satellite data has been integrated /assimilated into a forest ecosystem model (named 3D-CMCC) specifically developed for multi-specie, multi-age forests. 3D-CMCC permits to simulate forest areas with different forest layers, with different trees at different age on the same point. Moreover, the model permits to simulate management activities on the forest, thus evaluating the carbon stock evolution following a specific management scheme. The model has been modified including satellite data at 10m resolution, permitting the use of directly measured information, adding to the model the real phenological cycle of each simulated point. Satellite images have been collected by the JAXA ALOS-AVNIR-2 sensor. The integration schema has permitted to identify a spatial domain in which each pixel is characterised by a forest structure (species, ages, soil parameters), meteo-climatological parameters and estimated Leaf Area Index from satellite. The resulting software package (3D-CMCC-SAT) is built around 3D-CMCC: 2D / 3D input datasets are processed iterating on each point of the

  8. Single-photon sensitive Geiger-mode LADAR cameras

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; McDonald, Paul; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison

    2012-10-01

    Three-dimensional (3D) imaging with short wavelength infrared (SWIR) Laser Detection and Ranging (LADAR) systems has been successfully demonstrated on various platforms and has been quickly adopted in many military and civilian applications. To minimize LADAR system size, weight, and power (SWaP), it is highly desirable to maximize camera sensitivity. Recently Spectrolab has demonstrated a compact 32x32 LADAR camera with single-photon-level sensitivity at 1064 nm. This camera has many special features, such as non-uniform bias correction, a range gate width variable from 2 to 6 microseconds, windowing for smaller arrays, and short-pixel protection. Boeing integrated this camera with a 1.06 μm pulsed laser on various platforms and demonstrated 3D imaging. The features and recent test results of the 32x128 camera under development will be introduced.
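The relationship between the programmable range gate and the depth of the imaged range window follows directly from round-trip time of flight; a minimal sketch (the function and variable names are ours):

```python
C = 299_792_458.0  # speed of light, m/s

def gate_to_range(photon_time_s, gate_open_s, gate_width_s):
    """Convert a Geiger-mode APD photon timestamp to target range,
    rejecting events outside the range gate. The abstract's 2-6
    microsecond gate widths correspond to ~300-900 m deep range windows."""
    if not (gate_open_s <= photon_time_s < gate_open_s + gate_width_s):
        return None  # event outside the gate: noise, or a protected pixel
    return C * photon_time_s / 2.0  # divide by 2 for the round trip

# Depth of range window covered by a 2 microsecond gate: ~300 m.
depth_of_2us_gate = C * 2e-6 / 2.0
```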

  9. 3-D Raman Imagery and Atomic Force Microscopy of Ancient Microscopic Fossils

    NASA Astrophysics Data System (ADS)

    Schopf, J.

    2003-12-01

    Investigations of the Precambrian (~540- to ~3,500-Ma-old) fossil record depend critically on identification of authentic microbial fossils. Combined with standard paleontologic studies (e.g., of paleoecologic setting, population structure, cellular morphology, preservational variants), two techniques recently introduced to such studies -- Raman imagery and atomic force microscopy -- can help meet this need. Laser-Raman imagery is a non-intrusive, non-destructive technique that can be used to demonstrate a micron-scale one-to-one correlation between optically discernable morphology and the organic (kerogenous) composition of individual microbial fossils(1,2), a prime indicator of biogenicity. Such analyses can be used to characterize the molecular-structural makeup of organic-walled microscopic fossils both in acid-resistant residues and in petrographic thin sections, and whether the fossils analyzed are exposed at the upper surface of, or are embedded within (to depths >65 microns), the section studied. By providing means to map chemically, in three dimensions, whole fossils or parts of such fossils(3), Raman imagery can also show the presence of cell lumina, interior cellular cavities, another prime indicator of biogenicity. Atomic force microscopy (AFM) has been used to visualize the nanometer-scale structure of the kerogenous components of single Precambrian microscopic fossils(4). Capable of analyzing minute fragments of ancient organic matter exposed at the upper surface of thin sections (or of kerogen particles deposited on flat surfaces), such analyses hold promise not only for discriminating between biotic and abiotic micro-objects but for elucidation of the domain size -- and, thus, the degree of graphitization -- of the graphene subunits of the carbonaceous matter analyzed. These techniques -- both new to paleobiology -- can provide useful insight into the biogenicity and geochemical maturity of ancient organic matter.
References: (1) Kudryavtsev, A.B. et

  10. 3D target tracking in infrared imagery by SIFT-based distance histograms

    NASA Astrophysics Data System (ADS)

    Yan, Ruicheng; Cao, Zhiguo

    2011-11-01

    SIFT tracking is an excellent point-based tracking algorithm with high performance and accuracy, owing to its robustness against rotation, scale change, and occlusion. However, when tracking a large 3D target in complicated real scenarios in a forward-looking infrared (FLIR) image sequence taken from an airborne moving platform, a tracked point lying on a vertical surface usually drifts away from its correct position. In this paper, we propose a novel algorithm for 3D target tracking in FLIR image sequences. Our approach uses SIFT keypoints detected in consecutive frames for point correspondence. The candidate position of the tracked point is first estimated by computing the affine transformation from local corresponding SIFT keypoints. The correct position is then located via an optimization method: Euclidean distances between a candidate point and nearby SIFT keypoints are calculated and formed into a SIFT-based distance histogram. The distance histogram defines a cost of associating each candidate point with the correct tracked point, using a constraint based on the topology of each candidate point with its surrounding SIFT keypoints. Minimization of the cost is formulated as a combinatorial optimization problem. Experiments demonstrate that the proposed algorithm efficiently improves tracking performance and accuracy.
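A simplified sketch of the distance histogram and its association cost, assuming an L1 comparison between normalized histograms (the paper formulates the final association as a combinatorial optimization; this only illustrates the per-candidate cost term):

```python
import math

def distance_histogram(point, keypoints, bin_width=5.0, n_bins=10):
    """Histogram of Euclidean distances from a candidate point to its
    surrounding SIFT keypoints; encodes the local point topology."""
    hist = [0] * n_bins
    for kx, ky in keypoints:
        d = math.hypot(kx - point[0], ky - point[1])
        b = min(int(d / bin_width), n_bins - 1)
        hist[b] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def association_cost(h1, h2):
    """L1 distance between two normalized histograms: the cost minimized
    when associating a candidate with the tracked point."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Same candidate vs. a shifted candidate against the same keypoint set.
kps = [(3, 4), (6, 8), (0, 5)]
h_ref = distance_histogram((0, 0), kps)
h_shifted = distance_histogram((2, 0), kps)
```

A candidate whose local topology matches the reference yields zero cost; drifting candidates accumulate cost as their distance pattern changes.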

  11. The Maradi fault zone: 3-D imagery of a classic wrench fault in Oman

    SciTech Connect

    Neuhaus, D.

    1993-09-01

    The Maradi fault zone extends for almost 350 km in a north-northwest-south-southeast direction from the Oman Mountain foothills into the Arabian Sea, thereby dissecting two prolific hydrocarbon provinces, the Ghaba and Fahud salt basins. During its major Late Cretaceous period of movement, the Maradi fault zone acted as a left-lateral wrench fault. An early exploration campaign based on two-dimensional seismic targeted at fractured Cretaceous carbonates had mixed success and resulted in the discovery of one producing oil field. The structural complexity, rapidly varying carbonate facies, and uncertain fracture distribution prevented further drilling activity. In 1990 a three-dimensional (3-D) seismic survey covering some 500 km² was acquired over the transpressional northern part of the Maradi fault zone. The good data quality and the focusing power of 3-D has enabled stunning insight into the complex structural style of a "textbook" wrench fault, even at deeper levels and below reverse faults hitherto unexplored. Subtle thickness changes within the carbonate reservoir and the unconformably overlying shale seal provided the tool for the identification of possible shoals and depocenters. Horizon attribute maps revealed in detail the various structural components of the wrench assemblage and highlighted areas of increased small-scale faulting/fracturing. The results of four recent exploration wells will be demonstrated and their impact on the interpretation discussed.

  12. Stereo 3-D Imagery Uses for Definition of Geologic Structures and Geomorphic Features (Anaglyph colored glasses employed)

    NASA Astrophysics Data System (ADS)

    Hicks, B. G.; Fuente, J. D.

    2008-12-01

    Recently completed projects incorporating TopoMorpher* digital images as adjuncts to commonly employed tools have emphasized the distinct advantage gained with STEREO 3-D DIGITAL IMAGERY. Manipulating scale, relief (four types of digital shading), sun angle, viewing direction, scene tilt, etc. -- to produce differing views of the same terrain -- aids in identifying, tracing, and interpreting ground surface anomalies. *TopoMorpher is a digital software product of Eighteen Software (18 software.com). The advantage of stereo 3-D views combined with digital removal of vegetation that blocks interpretation (commonly called 'bare earth/naked' views) cannot be over-emphasized. The TopoMorpher program creates scenes transferable to disk for printing at any size, and works with a computer projector to allow large displays and easy discussion for groups. The examples include (1) fault systems for targeting water well locations in bedrock and (2) delineation of debris slide and avalanche terrain. Combining geologic mapping and spring locations with stereo 3-D TopoMorpher tracing of fault lineaments has allowed targeting of water well drilling sites, and the selection of geophysical study areas for well siting has been simplified. Stereo 3-D TopoMorpher has a specific "relief/terrain setting" to define potential failure sites by producing detailed colored slope maps keyed to field-data-derived parameters. Posters display individual project images and large-scale overviews for identifying unusual major terrain features. Images at scales using 10 and 30 meter digital data as well as lidar (< 1 meter) will be shown.

  13. Initial Results of 3D Topographic Mapping Using Lunar Reconnaissance Orbiter Camera (LROC) Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Li, R.; Oberst, J.; McEwen, A. S.; Archinal, B. A.; Beyer, R. A.; Thomas, P. C.; Chen, Y.; Hwangbo, J.; Lawver, J. D.; Scholten, F.; Mattson, S. S.; Howington-Kraus, A. E.; Robinson, M. S.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO), launched June 18, 2009, carries the Lunar Reconnaissance Orbiter Camera (LROC) as one of seven remote sensing instruments on board. The camera system is equipped with a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NAC) for systematic lunar surface mapping and detailed site characterization for potential landing site selection and resource identification. The LROC WAC is a pushframe camera with five 14-line by 704-sample framelets for visible light bands and two 16-line by 512-sample (summed 4x to 4 by 128) UV bands. The WAC can also acquire monochrome images with a 14-line by 1024-sample format. At the nominal 50-km orbit the visible bands ground scale is 75-m/pixel and the UV 383-m/pixel. Overlapping WAC images from adjacent orbits can be used to map topography at a scale of a few hundred meters. The two panchromatic NAC cameras are pushbroom imaging sensors each with a Cassegrain telescope of a 700-mm focal length. The two NAC cameras are aligned with a small overlap in the cross-track direction so that they cover a 5-km swath with a combined field-of-view (FOV) of 5.6°. At an altitude of 50-km, the NAC can provide panchromatic images from its 5,000-pixel linear CCD at a ground scale of 0.5-m/pixel. Calibration of the cameras was performed by using precision collimator measurements to determine the camera principal points and radial lens distortion. The orientation of the two NAC cameras is estimated by a boresight calibration using double and triple overlapping NAC images of the lunar surface. The resulting calibration results are incorporated into a photogrammetric bundle adjustment (BA), which models the LROC camera imaging geometry, in order to refine the exterior orientation (EO) parameters initially retrieved from the SPICE kernels. Consequently, the improved EO parameters can significantly enhance the quality of topographic products derived from LROC NAC imagery. In addition, an analysis of the spacecraft
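The quoted ground scales follow from simple pushbroom imaging geometry. In the sketch below, the ~7 micron detector pitch is our inference from the stated 700 mm focal length and 0.5 m/pixel figure, not a number given in the abstract.

```python
def ground_scale(altitude_m, focal_length_m, pixel_pitch_m):
    """Ground sample distance for a pushbroom camera: the pixel footprint
    scales linearly with altitude over focal length."""
    return altitude_m * pixel_pitch_m / focal_length_m

# NAC figures from the abstract: a 700 mm telescope at the 50 km orbit
# yields 0.5 m/pixel, implying a ~7 micron detector pitch (our inference).
gsd = ground_scale(50_000.0, 0.700, 7e-6)   # 0.5 m/pixel
swath = 5_000 * gsd                         # 5,000-pixel CCD -> 2.5 km per NAC
```

Two side-by-side NACs with a small cross-track overlap then cover the combined 5 km swath the abstract describes.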

  14. Inlining 3d Reconstruction, Multi-Source Texture Mapping and Semantic Analysis Using Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.

    2016-06-01

    This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information, the roof, wall and ground surfaces found get intersected and limited in their extension to form a closed 3D building hull. For texture mapping, the hull polygons are projected into each possible input bitmap to find suitable color sources regarding coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and

  15. Very fast road database verification using textured 3D city models obtained from airborne imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV-images. This algorithm contains two processes, which exchange input and output, but basically run independently from each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. 
Depending on the time issue and availability of a geo-database for buildings, the urban terrain reconstruction procedure has semantic models
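One simplified reading of the mapping from the two per-method distributions onto the three states correct/incorrect/unknown can be sketched as follows. The paper uses Dempster-Shafer combination; here the mass of "model not applicable" is simply treated as ignorance, which is an assumption of ours, not the paper's exact rule.

```python
def fuse_road_evidence(p_correct, p_applicable):
    """Map a method's two distributions onto correct / incorrect / unknown.
    Evidence only counts where the road model applies; the mass assigned
    to 'model not applicable' becomes ignorance (unknown)."""
    correct = p_correct * p_applicable
    incorrect = (1.0 - p_correct) * p_applicable
    unknown = 1.0 - p_applicable
    return {"correct": correct, "incorrect": incorrect, "unknown": unknown}

# A method that is confident the road is correct, but whose road model
# only partially applies to this scene.
verdict = fuse_road_evidence(p_correct=0.9, p_applicable=0.8)
```

Database entries landing mostly in "unknown" would then be deferred to another method whose road model fits that road type better.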

  16. Two Eyes, 3D Early Results: Stereoscopic vs 2D Representations of Highly Spatial Scientific Imagery

    NASA Astrophysics Data System (ADS)

    Price, Aaron

    2013-06-01

    "Two Eyes, 3D" is a 3-year NSF-funded research project studying the educational impact of using stereoscopic representations in informal settings. The first study conducted as part of the project tested children aged 5-12 on their ability to perceive spatial elements of slides of scientific objects shown to them in either stereoscopic or 2D format. Children were also tested for prior spatial ability. Early results suggest that stereoscopy does not have a major impact on perceiving the spatial elements of an image, but it has a more significant impact on how children apply that knowledge when presented with a common-sense situation. The project is run by the AAVSO, and this study was conducted at the Boston Museum of Science.

  17. Learning structured models for segmentation of 2-D and 3-D imagery.

    PubMed

    Lucchi, Aurelien; Marquez-Neila, Pablo; Becker, Carlos; Li, Yunpeng; Smith, Kevin; Knott, Graham; Fua, Pascal

    2015-05-01

    Efficient and accurate segmentation of cellular structures in microscopic data is an essential task in medical imaging. Many state-of-the-art approaches to image segmentation use structured models whose parameters must be carefully chosen for optimal performance. A popular choice is to learn them using a large-margin framework and more specifically structured support vector machines (SSVM). Although SSVMs are appealing, they suffer from certain limitations. First, they are restricted in practice to linear kernels because the more powerful nonlinear kernels cause the learning to become prohibitively expensive. Second, they require iteratively finding the most violated constraints, which is often intractable for the loopy graphical models used in image segmentation. This requires approximation that can lead to reduced quality of learning. In this paper, we propose three novel techniques to overcome these limitations. We first introduce a method to "kernelize" the features so that a linear SSVM framework can leverage the power of nonlinear kernels without incurring much additional computational cost. Moreover, we employ a working set of constraints to increase the reliability of approximate subgradient methods and introduce a new way to select a suitable step size at each iteration. We demonstrate the strength of our approach on both 2-D and 3-D electron microscopic (EM) image data and show consistent performance improvement over state-of-the-art approaches. PMID:25438309
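One standard way to "kernelize" features so that a linear large-margin learner gains nonlinear power is an explicit random feature map approximating an RBF kernel. This is a generic illustration of the idea, not necessarily the construction used in the paper.

```python
import math, random

def random_fourier_features(x, n_features, gamma, seed=0):
    """Explicit feature map whose dot products approximate the RBF kernel
    exp(-gamma * ||x - y||^2). The same seed must be used for every input
    so that all points share one feature map."""
    rng = random.Random(seed)
    d = len(x)
    feats = []
    for _ in range(n_features):
        w = [rng.gauss(0.0, math.sqrt(2.0 * gamma)) for _ in range(d)]
        b = rng.uniform(0.0, 2.0 * math.pi)
        feats.append(math.sqrt(2.0 / n_features) *
                     math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b))
    return feats

# Dot product of mapped vectors approximates the kernel value.
zx = random_fourier_features((0.0, 0.0), 2000, gamma=0.5)
zy = random_fourier_features((0.3, -0.1), 2000, gamma=0.5)
approx_k = sum(a * b for a, b in zip(zx, zy))  # ~ exp(-0.5 * 0.1)
```

After such a mapping, a plain linear SSVM trained on the new features behaves like a nonlinear kernel machine at near-linear cost.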

  18. Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery

    SciTech Connect

    Karakaya, Mahmut; Kerekes, Ryan A; Gleason, Shaun Scott; Martins, Rodrigo; Dyer, Michael

    2011-01-01

    Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.
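The seeded segmentation step can be illustrated with a toy region-growing routine on a 2D grid, using a soma position as the seed point. This is a simplified stand-in for the paper's per-neuron thresholding of 3D confocal stacks.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a segment outward from a soma seed point, keeping 4-connected
    pixels whose intensity meets a per-neuron threshold."""
    rows, cols = len(image), len(image[0])
    seen = {seed}
    queue = deque([seed])
    segment = []
    while queue:
        r, c = queue.popleft()
        if image[r][c] < threshold:
            continue  # below threshold: background, do not expand
        segment.append((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return segment

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 0, 0],
       [0, 0, 0, 0]]
soma_pixels = region_grow(img, (1, 1), threshold=5)
```

Skeletonizing the grown segment and analyzing the resulting network would then yield the shape features used for normal/abnormal classification.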

  19. Quantification of gully volume using very high resolution DSM generated through 3D reconstruction from airborne and field digital imagery

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Zarco-Tejada, Pablo; Laredo, Mario; Gómez, Jose Alfonso

    2013-04-01

    Major advances have been made recently in automatic 3D photo-reconstruction techniques using uncalibrated and non-metric cameras (James and Robson, 2012). However, its application on soil conservation studies and landscape feature identification is currently at the outset. The aim of this work is to compare the performance of a remote sensing technique using a digital camera mounted on an airborne platform, with 3D photo-reconstruction, a method already validated for gully erosion assessment purposes (Castillo et al., 2012). A field survey was conducted in November 2012 in a 250 m-long gully located in field crops on a Vertisol in Cordoba (Spain). The airborne campaign was conducted with a 4000x3000 digital camera installed onboard an aircraft flying at 300 m above ground level to acquire 6 cm resolution imagery. A total of 990 images were acquired over the area ensuring a large overlap in the across- and along-track direction of the aircraft. An ortho-mosaic and the digital surface model (DSM) were obtained through automatic aerial triangulation and camera calibration methods. For the field-level photo-reconstruction technique, the gully was divided in several reaches to allow appropriate reconstruction (about 150 pictures taken per reach) and, finally, the resulting point clouds were merged into a unique mesh. A centimetric-accuracy GPS provided a benchmark dataset for gully perimeter and distinguishable reference points in order to allow the assessment of measurement errors of the airborne technique and the georeferencing of the photo-reconstruction 3D model. The uncertainty on the gully limits definition was explicitly addressed by comparison of several criteria obtained by 3D models (slope and second derivative) with the outer perimeter obtained by the GPS operator visually identifying the change in slope at the top of the gully walls. In this study we discussed the magnitude of planimetric and altimetric errors and the differences observed between the
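Once a DSM and a reference (pre-erosion) surface are available, the gully volume estimate reduces to integrating incision depth over grid cells. A minimal sketch with made-up numbers at the abstract's 6 cm grid resolution:

```python
def gully_volume(dsm, reference, cell_size_m):
    """Estimate eroded gully volume from a DSM by integrating the depth
    below a reference (pre-erosion) surface over every grid cell."""
    cell_area = cell_size_m * cell_size_m
    volume = 0.0
    for ref_row, dsm_row in zip(reference, dsm):
        for ref_z, dsm_z in zip(ref_row, dsm_row):
            depth = ref_z - dsm_z
            if depth > 0:  # only cells incised below the reference count
                volume += depth * cell_area
    return volume

# 3x3 patch at 0.06 m resolution with a 0.5 m deep incision in one cell.
dsm = [[10.0, 10.0, 10.0], [10.0, 9.5, 10.0], [10.0, 10.0, 10.0]]
ref = [[10.0, 10.0, 10.0]] * 3
vol = gully_volume(dsm, ref, cell_size_m=0.06)  # 0.5 m * 0.0036 m^2
```

The sensitivity to how the gully limits are drawn, discussed in the abstract, enters here through which cells are assigned a positive depth.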

  20. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models, such as flat, gable, hip, and pyramid hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time of learning can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach for detecting and extracting building roof patterns automatically, considering the complementary nature of height and RGB information.

  1. 3D Case Studies of Monitoring Dynamic Structural Tests using Long Exposure Imagery

    NASA Astrophysics Data System (ADS)

    McCarthy, D. M. J.; Chandler, J. H.; Palmeri, A.

    2014-06-01

    Structural health monitoring uses non-destructive testing programmes to detect long-term degradation phenomena in civil engineering structures. Structural testing may also be carried out to assess a structure's integrity following a potentially damaging event. Such investigations are increasingly carried out with vibration techniques, in which the structural response to artificial or natural excitations is recorded and analysed from a number of monitoring locations. Photogrammetry is of particular interest here since a very high number of monitoring locations can be measured using just a few images. To achieve the necessary imaging frequency to capture the vibration, it has been necessary to reduce the image resolution at the cost of spatial measurement accuracy. Even specialist sensors are limited by a compromise between sensor resolution and imaging frequency. To alleviate this compromise, a different approach has been developed and is described in this paper. Instead of using high-speed imaging to capture the instantaneous position at each epoch, long-exposure images are instead used, in which the localised image of the object becomes blurred. The approach has been extended to create 3D displacement vectors for each target point via multiple camera locations, which allows the simultaneous detection of transverse and torsional mode shapes. The proposed approach is frequency invariant allowing monitoring of higher modal frequencies irrespective of a sampling frequency. Since there is no requirement for imaging frequency, a higher image resolution is possible for the most accurate spatial measurement. The results of a small scale laboratory test using off-the-shelf consumer cameras are demonstrated. A larger experiment also demonstrates the scalability of the approach.
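The frequency invariance of the long-exposure approach can be seen in a toy simulation: the blurred streak of a sinusoidally vibrating point target spans twice the vibration amplitude regardless of how fast it oscillates. This is an illustration of the principle only, not the paper's image-processing pipeline.

```python
import math

def blur_extent(amplitude, n_samples=10_000):
    """Long-exposure image of a sinusoidally vibrating point target:
    accumulate positions over a full cycle and read the vibration
    amplitude back from the half-width of the blurred streak."""
    positions = [amplitude * math.sin(2.0 * math.pi * k / n_samples)
                 for k in range(n_samples)]
    return (max(positions) - min(positions)) / 2.0

# The recovered amplitude depends only on the motion, not its frequency,
# which is why no high-speed imaging is needed.
recovered = blur_extent(amplitude=1.5)
```

Measuring the streak at many target points from two or more camera positions then yields the 3D displacement vectors and mode shapes described above.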

  2. Differential Synthetic Aperture Ladar

    SciTech Connect

    Stappaerts, E A; Scharlemann, E

    2005-02-07

    We report a differential synthetic aperture ladar (DSAL) concept that relaxes platform and laser requirements compared to conventional SAL. Line-of-sight translation/vibration constraints are reduced by several orders of magnitude, while laser frequency stability is typically relaxed by an order of magnitude. The technique is most advantageous for shorter laser wavelengths, ultraviolet to mid-infrared. Analytical and modeling results, including the effects of speckle and atmospheric turbulence, are presented. Synthetic aperture ladars are of growing interest, and several theoretical and experimental papers have been published on the subject. Compared to RF synthetic aperture radar (SAR), platform/ladar motion and transmitter bandwidth constraints are especially demanding at optical wavelengths. For mid-IR and shorter wavelengths, deviations from a linear trajectory along the synthetic aperture length have to be submicron, or their magnitude must be measured to that precision for compensation. The laser coherence time has to be at least the synthetic aperture transit time, or the transmitter phase has to be recorded and a correction applied on detection.

  3. Flight test results of ladar brownout look-through capability

    NASA Astrophysics Data System (ADS)

    Stelmash, Stephen; Münsterer, Thomas; Kramper, Patrick; Samuelis, Christian; Bühler, Daniel; Wegner, Matthias; Sheth, Sagar

    2015-06-01

    The paper discusses recent results of flight tests performed with the Airbus Defence and Space ladar system at Yuma Proving Grounds. The ladar under test was the SferiSense® system, which is in operational use as an in-flight obstacle warning and avoidance system on the NH90 transport helicopter. Only minor modifications were made to the sensor firmware to optimize its performance in brownout. In addition, a new filtering algorithm designed to segment dust artefacts out of the collected 3D data in real time was employed. The results proved that this ladar sensor is capable of detecting obstacles through brownout dust clouds extending up to 300 meters in depth from the landing helicopter.
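    The abstract does not detail the dust-segmentation algorithm, but one common heuristic exploits the fact that dust returns are spatially diffuse while solid obstacles produce dense clusters of echoes. A toy illustration of that idea only (function name, cell size, and threshold are assumptions, not the SferiSense implementation):

```python
import numpy as np

def filter_dust(points, cell=1.0, min_hits=5):
    """Toy brownout filter: voxelise the point cloud and keep only points
    falling in cells with many echoes. Dust returns are spatially diffuse;
    solid obstacles give dense clusters. (Illustrative heuristic only.)"""
    idx = np.floor(points / cell).astype(int)
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    return points[counts[inverse] >= min_hits]

rng = np.random.default_rng(0)
obstacle = rng.normal([20.0, 0.0, 2.0], 0.2, size=(200, 3))  # dense cluster
dust = rng.uniform([0, -10, 0], [15, 10, 5], size=(200, 3))  # diffuse returns
kept = filter_dust(np.vstack([obstacle, dust]))
print(len(kept), "of 400 returns kept")  # essentially only the obstacle
```

A real-time system would add temporal persistence and intensity cues, but the density contrast is the core of most look-through filters.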

  4. Mapping tropical biodiversity using spectroscopic imagery : characterization of structural and chemical diversity with 3-D radiative transfer modeling

    NASA Astrophysics Data System (ADS)

    Feret, J. B.; Gastellu-Etchegorry, J. P.; Lefèvre-Fonollosa, M. J.; Proisy, C.; Asner, G. P.

    2014-12-01

    The accelerating loss of biodiversity is a major environmental trend. Tropical ecosystems are particularly threatened by climate change, invasive species, farming, and natural-resource exploitation. Recent advances in remote sensing of biodiversity confirmed the potential of high-spatial-resolution spectroscopic imagery for species identification and biodiversity mapping. Such information bridges the scale gap between small-scale, highly detailed field studies and large-scale, low-resolution satellite observations. In order to produce fine-scale-resolution maps of canopy alpha-diversity and beta-diversity of the Peruvian Amazonian forest, we designed, applied and validated a method based on the spectral variation hypothesis using CAO AToMS (Carnegie Airborne Observatory Airborne Taxonomic Mapping System) images acquired from 2011 to 2013. There is a need to understand on a quantitative basis the physical processes leading to this spectral variability, which mainly depends on canopy chemistry, canopy structure, and sensor characteristics. 3D radiative transfer modeling provides a powerful framework for studying the relative influence of each of these factors in dense and complex canopies. We simulated series of spectroscopic images with the 3D radiative transfer model DART, with variability gradients in terms of leaf chemistry, individual tree structure, and spatial and spectral resolution, and applied methods for biodiversity mapping. This sensitivity study allowed us to determine the relative influence of these factors on the radiometric signal acquired by different types of sensors. Such a study is particularly important to define the domain of validity of our approach, to refine requirements for the instrumental specifications, and to help prepare hyperspectral space missions to be launched in the 2015-2025 time frame (EnMAP, PRISMA, HISUI, SHALOM, HYSPIRI, HYPXIM). Simulations in preparation include topographic variations in order to estimate the robustness

  5. Combining Public Domain and Professional Panoramic Imagery for the Accurate and Dense 3d Reconstruction of the Destroyed Bel Temple in Palmyra

    NASA Astrophysics Data System (ADS)

    Wahbeh, W.; Nebiker, S.; Fangi, G.

    2016-06-01

    This paper exploits the potential of dense multi-image 3D reconstruction of destroyed cultural heritage monuments by either using public domain touristic imagery only or by combining the public domain imagery with professional panoramic imagery. The focus of our work is placed on the reconstruction of the temple of Bel, one of the Syrian heritage monuments, which was destroyed in September 2015 by the so-called "Islamic State". The great temple of Bel is considered one of the most important religious buildings of the 1st century AD in the East, with a unique design. The investigations and the reconstruction were carried out using two types of imagery. The first are freely available generic touristic photos collected from the web. The second are panoramic images captured in 2010 for documenting those monuments. In the paper we present a 3D reconstruction workflow for both types of imagery using state-of-the-art dense image matching software, addressing the non-trivial challenges of combining uncalibrated public domain imagery with panoramic images with very wide baselines. We subsequently investigate the aspects of accuracy and completeness obtainable from the public domain touristic images alone and from the combination with spherical panoramas. We furthermore discuss the challenges of co-registering the weakly connected 3D point cloud fragments resulting from the limited coverage of the touristic photos. We then describe an approach using spherical photogrammetry as a virtual topographic survey, allowing the co-registration of a detailed and accurate single 3D model of the temple interior and exterior.

  6. Brassboard development of a MEMS-scanned ladar sensor for small ground robots

    NASA Astrophysics Data System (ADS)

    Stann, Barry L.; Dammann, John F.; Enke, Joseph A.; Jian, Pey-Schuan; Giza, Mark M.; Lawler, William B.; Powers, Michael A.

    2011-06-01

    The Army Research Laboratory (ARL) is researching a short-range ladar imager for navigation, obstacle/collision avoidance, and target detection/identification on small unmanned ground vehicles (UGV). To date, commercial UGV ladars have been flawed by one or more factors including low pixelization, insufficient range or range resolution, image artifacts, no daylight operation, large size, high power consumption, and high cost. ARL built a breadboard ladar based on a newly developed but commercially available micro-electro-mechanical system (MEMS) mirror coupled to a low-cost pulsed Erbium fiber laser transmitter that largely addresses these problems. Last year we integrated the ladar and associated control software on an iRobot PackBot and distributed the ladar imagery data via the PackBot's computer network. The un-tethered PackBot was driven through an indoor obstacle course while displaying the ladar data in real time on a remote laptop computer over a wireless link. We later conducted additional driving experiments in cluttered outdoor environments. This year ARL partnered with General Dynamics Robotics Systems to start construction of a brassboard ladar design. This paper will discuss refinements and the rebuild of the various subsystems, including the transmitter and receiver module, the data acquisition and data processing board, and software, that will lead to a more compact, lower-cost, and better-performing ladar. The current ladar breadboard has a 5-6 Hz frame rate, an image size of 256 (h) × 128 (v) pixels, a 60° × 30° field of regard, 20 m range, eyesafe operation, and 40 cm range resolution (with provisions for super-resolution or accuracy).

  7. Study on key techniques for synthetic aperture ladar system

    NASA Astrophysics Data System (ADS)

    Cao, Changqing; Zeng, Xiaodong; Feng, Zhejun; Zhang, Wenrui; Su, Lei

    2008-03-01

    The spatial resolution of a conventional imaging LADAR system is constrained by the diffraction limit of the telescope aperture. The purpose of this work is to investigate Synthetic Aperture Imaging LADAR (SAIL), which employs aperture synthesis with coherent laser radar to overcome the diffraction limit and achieve fine-resolution, long-range, two-dimensional imaging with modest aperture diameters. Because of these advantages, LADAR based on synthetic aperture theory is becoming a research hotspot and is approaching practical use. Synthetic Aperture LADAR (SAL) technology satisfies the critical need for reliable, long-range battlefield awareness. An image that takes radar tens of seconds to produce can be produced in a few thousandths of a second at optical frequencies. While radar waves respond to macroscopic features such as corners, edges, and facets, laser waves interact with microscopic surface characteristics, which results in imagery that appears more familiar and is more easily interpreted. SAL could provide high-resolution optical/infrared imaging. In the present paper we have tried to answer three questions: (1) how the samples are collected over the large "synthetic" aperture; (2) the differences between SAR and SAL; (3) the key techniques for a SAL system. The principle and progress of SAL are introduced and a typical SAL system is described. Beam stabilization, chirp laser, and heterodyne detection, which are among the most challenging aspects of SAL, are discussed in detail.
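    The diffraction-limit motivation can be made concrete with the textbook cross-range formulas: a real aperture resolves roughly λR/D, while a strip-map synthetic aperture resolves roughly D/2 independent of range. The numbers below are illustrative, not taken from the paper:

```python
# Cross-range resolution: real (diffraction-limited) vs. synthetic aperture.
wavelength = 1.5e-6    # m, eye-safe fiber-laser band (illustrative)
aperture = 0.10        # m, telescope diameter
target_range = 100e3   # m

real_ap_res = wavelength * target_range / aperture  # ~ lambda * R / D
sal_res = aperture / 2.0                            # ~ D / 2, range-independent

print(f"real aperture   : {real_ap_res:.2f} m")  # 1.50 m at 100 km
print(f"synthetic (SAL) : {sal_res:.3f} m")      # 0.050 m at any range
```

The thirty-fold gap at this range, achieved with a modest 10 cm telescope, is the whole case for aperture synthesis at optical wavelengths.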

  8. Research on key technologies of LADAR echo signal simulator

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Shi, Rui; Ye, Jiansen; Wang, Xin; Li, Zhuo

    2015-10-01

    The LADAR echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR, and is designed to reproduce the LADAR return signal under laboratory conditions. The device provides the laser echo signal of target and background to imaging LADAR systems so that their performance can be tested. Some key technologies are investigated in this paper. Firstly, a 3D model of a typical target is built and transformed into target echo signal data based on the ranging equation and the target's reflection characteristics. Then, the system model and time series model of the LADAR echo signal simulator are established. Influential factors that could induce fixed delay error and random delay error in the simulated return signals are analyzed. In the simulation system, the signal propagation delay of the circuits and the response time of the pulsed laser contribute to the fixed delay error. The counting error of the digital delay generator, the jitter of the system clock and the desynchronization between trigger signal and clock signal contribute to the random delay error. Furthermore, these system insertion delays are analyzed quantitatively, and the noise data are obtained. The target echo signals are obtained by superimposing the noise data on the pure target echo signal. In order to overcome these disadvantageous factors, a method of adjusting the timing diagram of the simulation system is proposed. Finally, the simulated echo signals are processed using a detection algorithm to complete the 3D model reconstruction of the object. The simulation results reveal that the range resolution can be better than 8 cm.
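    The delay-error budget described above can be mocked up in a few lines: the replayed delay is the ideal two-way time of flight plus a fixed insertion delay (which can be calibrated out) and a random jitter term (which cannot). A sketch under illustrative assumptions; none of the magnitudes are taken from the paper:

```python
import numpy as np

def echo_delay(true_range_m, fixed_delay_s, jitter_std_s, rng):
    """Toy timing model for an echo simulator: ideal two-way time of
    flight, plus a fixed insertion delay (circuit propagation, laser
    response) and a random term (clock jitter, counter quantisation)."""
    c = 299_792_458.0
    return 2.0 * true_range_m / c + fixed_delay_s + rng.normal(0.0, jitter_std_s)

rng = np.random.default_rng(1)
c = 299_792_458.0
fixed = 20e-9  # known fixed insertion delay, calibrated out below
delays = np.array([echo_delay(100.0, fixed, 0.2e-9, rng) for _ in range(1000)])
ranges = (delays - fixed) * c / 2.0   # subtract the calibrated fixed delay
print(f"range std ≈ {ranges.std() * 100:.1f} cm")  # 0.2 ns jitter -> ~3 cm
```

With the fixed delay calibrated out, only the jitter limits resolution: 0.2 ns of timing jitter corresponds to about 3 cm of range error, comfortably inside the better-than-8-cm figure quoted above.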

  9. Airborne ladar man-in-the-loop operations in tactical environments

    NASA Astrophysics Data System (ADS)

    Grobmyer, Joseph E., Jr.; Lum, Tommy; Morris, Robert E.; Hard, Sarah J.; Pratt, H. L.; Florence, Tom; Peddycoart, Ed

    2004-09-01

    The U.S. Army Research, Development and Engineering Command (RDECOM) is developing approaches and processes that will exploit the characteristics of current and future Laser Radar (LADAR) sensor systems for critical man-in-the-loop tactical processes. The importance of timely and accurate target detection, classification, identification, and engagement for future combat systems has been documented and is viewed as a critical enabling factor for FCS survivability and lethality. Recent work has demonstrated the feasibility of using low-cost but relatively capable personal-computer-class systems to exploit the information available in LADAR sensor frames to present the war fighter or analyst with compelling and usable imagery for use in the target identification and engagement processes in near real time. The advantages of LADAR imagery are significant in environments presenting cover for targets and the associated difficulty for automated target recognition (ATR) technologies.

  10. Comparison of 3D representations depicting micro folds: overlapping imagery vs. time-of-flight laser scanner

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, Aristidis D.; Georgopoulos, Andreas; Lozios, Stylianos G.

    2012-10-01

    A relatively new field of interest, which continually gains ground nowadays, is digital 3D modeling. However, the methodologies, the accuracy and the time and effort required to produce a high-quality 3D model have changed drastically in the last few years. Whereas in the early days of digital 3D modeling, 3D models were only accessible to computer experts in animation, working many hours in expensive, sophisticated software, today 3D modeling has become reasonably fast and convenient. On top of that, with online 3D modeling software such as 123D Catch, nearly everyone can produce 3D models with minimum effort and at no cost. The only requirement is panoramic overlapping images of the (still) objects the user wishes to model. This approach, however, has limitations in the accuracy of the model. An objective of the study is to examine these limitations by assessing the accuracy of this 3D modeling methodology against a Terrestrial Laser Scanner (TLS). Therefore, the scope of this study is to present and compare 3D models produced with two different methods: 1) the traditional TLS method, using the Leica ScanStation 2 instrument, and 2) panoramic overlapping images obtained with a DSLR camera and processed with the free 123D Catch software. The main objective of the study is to evaluate the advantages and disadvantages of the two 3D-model-producing methodologies. The area represented by the 3D models features multi-scale folding in a cipollino marble formation. The most interesting part, and the most challenging to capture accurately, is an outcrop which includes vertically orientated micro folds. These micro folds have dimensions of a few centimeters, while a relatively strong relief is evident between them (perhaps due to different material composition). The area of interest is located in Mt. Hymittos, Greece.

  11. Low-cost ladar imagers

    NASA Astrophysics Data System (ADS)

    Vasile, S.; Lipson, J.

    2008-04-01

    We have developed low-cost LADAR imagers using photon-counting Geiger avalanche photodiode (GPD) arrays, signal amplification and conditioning interface with integrated active quenching circuits (AQCs) and readout integrated circuit (ROIC) arrays for time to digital conversion (TDC) implemented in FPGA. Our goal is to develop a compact, low-cost LADAR receiver that could be operated with room temperature Si-GPD arrays and cooled InGaAs GPD arrays. We report on architecture selection criteria, integration issues of the GPD, AQC and TDC, gating and programmable features for flexible and low-cost re-configuration, as well as on timing resolution, precision and accuracy of our latest LADAR designs.

  12. 3D Visualisation and Artistic Imagery to Enhance Interest in "Hidden Environments"--New Approaches to Soil Science

    ERIC Educational Resources Information Center

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-01-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke "soil atlas" was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets…

  13. Generation of 3D Model for Urban area using Ikonos and Cartosat-1 Satellite Imageries with RS and GIS Techniques

    NASA Astrophysics Data System (ADS)

    Rajpriya, N. R.; Vyas, A.; Sharma, S. A.

    2014-11-01

    Urban design is a subject concerned with the shape, the surface and the physical arrangement of all kinds of urban elements; it is a practical process that requires detailed, multi-dimensional description. Spatial analysis based on 3D city models offers the possibility of solving these problems. Ahmedabad is the third-fastest-growing city in the world, with a large amount of development in infrastructure and planning. The fabric of the city is changing and expanding at the same time, which creates a need for 3D visualization of the city in order to develop sustainable planning for it. These areas have to be monitored and mapped on a regular basis, and satellite remote sensing images provide a valuable and irreplaceable source for urban monitoring. With this, the derivation of structural urban types or the mapping of urban biotopes becomes possible. The present study focused on developing a technique for 3D modeling of buildings for urban area analysis and on implementing the encoding standards prescribed in OGC CityGML for urban features. An attempt has been made to develop a 3D city model with level of detail 1 (LOD 1) for part of the city of Ahmedabad in the State of Gujarat, India. It shows the capability to monitor urbanization in 2D and 3D.

  14. Geiger-mode ladar cameras

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Boisvert, Joseph; McDonald, Paul; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison; Van Duyne, Stephen; Pauls, Greg; Gaalema, Stephen

    2011-06-01

    The performance of Geiger-mode LAser Detection and Ranging (LADAR) cameras is primarily defined by individual pixel attributes, such as dark count rate (DCR), photon detection efficiency (PDE), jitter, and crosstalk. However, for the expanding LADAR imaging applications, other factors, such as image uniformity, component tolerance, manufacturability, reliability, and operational features, have to be considered. Recently we have developed new 32×32 and 32×128 Read-Out Integrated Circuits (ROIC) for LADAR applications. With multiple filter and absorber structures, the 50-μm-pitch arrays demonstrate pixel crosstalk below the 100 ppm level, while maintaining a PDE greater than 40% at 4 V overbias. Besides the improved epitaxial and process uniformity of the APD arrays, the new ROICs implement a Non-uniform Bias (NUB) circuit providing 4-bit bias voltage tunability over a 2.5 V range to individually bias each pixel. All these features greatly increase the performance uniformity of the LADAR camera. Cameras based on these ROICs were integrated with a data acquisition system developed by Boeing DES. The 32×32 version has a range gate of up to 7 μs and can cover a range window of about 1 km with 14-bit, 0.5 ns timing resolution. The 32×128 camera can be operated at a frame rate of up to 20 kHz with 14-bit, 0.3 ns time resolution through a full CameraLink interface. The performance of the 32×32 LADAR camera has been demonstrated in a series of field tests on various vehicles.
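    The gate and timing figures convert directly to range via the two-way light travel time, r = c·t/2; the arithmetic below reproduces the "about 1 km" window quoted for the 32×32 camera:

```python
# Converting the quoted gate and timing numbers to range, via r = c * t / 2.
c = 299_792_458.0     # m/s

gate_s = 7e-6         # 32x32 range gate from the abstract
bin_s = 0.5e-9        # timing resolution from the abstract

range_window = c * gate_s / 2.0   # depth covered by the full gate
range_bin = c * bin_s / 2.0       # depth covered by one timing bin

print(f"range window : {range_window:.0f} m")      # 1049 m, i.e. "about 1 km"
print(f"range bin    : {range_bin * 100:.1f} cm")  # 7.5 cm
```

The same conversion gives about 4.5 cm per bin for the 0.3 ns resolution of the 32×128 camera.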

  15. Geological interpretation and analysis of surface based, spatially referenced planetary imagery data using PRoGIS 2.0 and Pro3D.

    NASA Astrophysics Data System (ADS)

    Barnes, R.; Gupta, S.; Giordano, M.; Morley, J. G.; Muller, J. P.; Tao, Y.; Sprinks, J.; Traxler, C.; Hesina, G.; Ortner, T.; Sander, K.; Nauschnegg, B.; Paar, G.; Willner, K.; Pajdla, T.

    2015-10-01

    We apply the capabilities of the geospatial environment PRoGIS 2.0 and the real-time rendering viewer PRo3D to geological analysis of datasets from NASA's Mars Exploration Rover B (MER-B, the Opportunity rover) and Mars Science Laboratory (MSL, the Curiosity rover). Short-baseline and serendipitous long-baseline stereo Pancam rover imagery are used to create 3D point clouds, which can be combined with super-resolution images derived from Mars Reconnaissance Orbiter HiRISE orbital data and super-resolution outcrop images derived from MER Pancam, as well as hand-lens-scale images, for geology and outcrop characterization at all scales. Data within the PRoViDE database are presented and accessed through the PRoGIS interface. Simple geological measurement tools are implemented within the PRoGIS and PRo3D web software to accurately measure the dip and strike of bedding in outcrops, create detailed stratigraphic logs for correlation between the areas investigated, and develop realistic 3D models for the characterization of planetary surface processes. Annotation tools are being developed to aid discussion and dissemination of the observations within the planetary science community.

  16. Optimization of space borne imaging ladar sensor for asteroid studies using parameter design

    NASA Astrophysics Data System (ADS)

    Wheel, Peter J.; Dobbs, Michael E.; Sharp, William E.

    2002-10-01

    Imaging LADAR is a hybrid technology that offers the ability to measure basic physical and morphological characteristics (topography, rotational state, and density) of a small body from a single fast flyby, without requiring months in orbit. In addition, imaging LADAR provides key flight navigation information including range, altitude, hazard/target avoidance, and closed-loop landing/fly-by navigation information. The NEAR Laser Rangefinder demonstrated many of these capabilities as part of the NEAR mission. The imaging LADAR scales the concept of a laser ranger into a full 3D imager. Imaging LADAR systems combine laser illumination of the target (which means that imaging is independent of solar illumination and the image SNR is controlled by the observer) with laser ranging and imaging (producing high-resolution 3D images in a fraction of the time necessary for a passive imager). The technical concept described below alters the traditional design space (dominated by pulsed LADAR systems) with the introduction of a pseudo-noise (PN) coded continuous wave (CW) laser system, which allows for variable range resolution mapping and leverages enormous commercial investments in high-power, long-life lasers for telecommunications.
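    PN-coded CW ranging recovers the round-trip delay by correlating the received signal against the transmitted code: the correlation peak sits at the target lag, and range resolution is set by the chip duration rather than by a pulse width. A toy baseband sketch of the idea (code length, chip time, echo strength, and noise level are all illustrative assumptions):

```python
import numpy as np

def pn_range(code, received, chip_s):
    """Estimate range from the lag of the circular cross-correlation
    peak between the transmitted PN code and the received signal."""
    corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
    lag = int(np.argmax(corr))
    return lag * chip_s * 299_792_458.0 / 2.0

rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], size=1023)        # +-1 PN-like sequence
echo = 0.2 * np.roll(code, 137)                  # weak echo, 137-chip delay
echo += rng.normal(0.0, 0.5, code.size)          # receiver noise
est = pn_range(code, echo, chip_s=10e-9)         # 10 ns chips -> 1.5 m bins
print(f"estimated range ≈ {est:.1f} m")          # 137 chips * ~1.5 m ≈ 205 m
```

Varying the chip duration trades range resolution against unambiguous range, which is the "variable range resolution mapping" the concept advertises.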

  17. Forest Inventory Attribute Estimation Using Airborne Laser Scanning, Aerial Stereo Imagery, Radargrammetry and Interferometry-Finnish Experiences of the 3d Techniques

    NASA Astrophysics Data System (ADS)

    Holopainen, M.; Vastaranta, M.; Karjalainen, M.; Karila, K.; Kaasalainen, S.; Honkavaara, E.; Hyyppä, J.

    2015-03-01

    Three-dimensional (3D) remote sensing has enabled detailed mapping of terrain and vegetation heights. Consequently, forest inventory attributes are increasingly estimated using point clouds and normalized surface models. In practical applications, mainly airborne laser scanning (ALS) has been used in forest resource mapping. The current status is that ALS-based forest inventories are widespread, and the popularity of ALS has also raised interest in alternative 3D techniques, including airborne and spaceborne techniques. Point clouds can be generated using photogrammetry, radargrammetry and interferometry: airborne stereo imagery can be used to derive photogrammetric point clouds, while very-high-resolution synthetic aperture radar (SAR) data are used in radargrammetry and interferometry. ALS is capable of mapping both the terrain and tree heights in mixed forest conditions, which is an advantage over aerial images or SAR data. However, in many jurisdictions, a detailed ALS-based digital terrain model is already available, and that enables linking photogrammetric or SAR-derived heights to heights above the ground. In other words, in forest conditions, the height of single trees, the height of the canopy and/or the density of the canopy can be measured and used in the estimation of forest inventory attributes. In this paper, we first review experiences of the use of digital stereo imagery and spaceborne SAR in the estimation of forest inventory attributes in Finland, comparing these techniques to ALS. In addition, we aim to present new implications based on our experiences.

  18. JAVA implemented MSE optimal bit-rate allocation applied to 3-D hyperspectral imagery using JPEG2000 compression

    NASA Astrophysics Data System (ADS)

    Melchor, J. L., Jr.; Cabrera, S. D.; Aguirre, A.; Kosheleva, O. M.; Vidal, E., Jr.

    2005-08-01

    This paper describes an efficient algorithm and its Java implementation for a recently developed mean-squared error (MSE) rate-distortion optimal (RDO) inter-slice bit-rate allocation (BRA) scheme applicable to the JPEG2000 Part 2 (J2KP2) framework. Its performance is illustrated on hyperspectral imagery data using the J2KP2 with the Karhunen-Loève transform (KLT) for decorrelation. The results are contrasted with those obtained using the traditional log-variance-based BRA method and with the original RDO algorithm. The implementation has been developed as a Java plug-in to be incorporated into our evolving multi-dimensional data compression software tool denoted CompressMD. The RDO approach to BRA uses discrete rate-distortion curves (RDCs) for each slice of transform coefficients. The generation of each point on an RDC requires a full decompression of that slice; therefore, the efficient version minimizes the number of RDC points needed from each slice by using a localized coarse-to-fine approach denoted RDOEfficient. The scheme is illustrated in detail using a subset of 10 bands of hyperspectral imagery data and is contrasted with the original RDO implementation and the traditional (log-variance) method of BRA, showing that better results are obtained with the RDO methods. The three schemes are also tested on two hyperspectral imagery data sets with all bands present: the Cuprite radiance data from AVIRIS and a set derived from the Hyperion satellite. The results from the RDO and RDOEfficient are very close to each other in the MSE sense, indicating that the adaptive approach can find almost the same BRA solution. Surprisingly, the traditional method also performs very close to the RDO methods, indicating that it is very close to being optimal for these types of data sets.
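    For reference, the "traditional" log-variance allocation that the RDO scheme is compared against has a textbook closed form: under a high-rate Gaussian model, slice i receives the average rate plus half the log-ratio of its variance to the geometric-mean variance. A small sketch of that baseline only (not the paper's RDO code):

```python
import numpy as np

def log_variance_allocation(variances, avg_bits):
    """Classic high-rate bit allocation across transform slices:
    b_i = avg_bits + 0.5 * log2(var_i / geometric_mean(var)).
    Negative allocations would normally be clipped and re-iterated."""
    v = np.asarray(variances, dtype=float)
    geo_mean = np.exp(np.log(v).mean())
    return avg_bits + 0.5 * np.log2(v / geo_mean)

bits = log_variance_allocation([400.0, 100.0, 25.0], avg_bits=4.0)
print(bits)          # ≈ [5, 4, 3] -- high-variance slices get more bits
print(bits.mean())   # ≈ 4.0 -- the average rate is preserved
```

The RDO approach replaces this model-based formula with measured rate-distortion curves per slice, which is why each curve point costs a full decompression.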

  19. Initial progress in the recording of crime scene simulations using 3D laser structured light imagery techniques for law enforcement and forensic applications

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Monson, Keith L.

    1998-03-01

    Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study, and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur or be alleged concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly. The second study involved using the laser mapping system on a
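    Structured-light systems of this kind recover depth by triangulation between the laser projector and the camera: in the simplest pinhole model, depth is the focal length times the baseline divided by the image offset of the laser spot. A back-of-the-envelope sketch with generic textbook quantities (not the parameters of the system described above):

```python
# Pinhole triangulation for a laser structured-light rangefinder.
focal_mm = 16.0       # camera focal length
baseline_m = 0.30     # projector-to-camera separation
offset_mm = 0.6       # image-plane displacement of the laser spot

depth_m = focal_mm * baseline_m / offset_mm
print(f"depth ≈ {depth_m:.1f} m")  # 16 * 0.3 / 0.6 = 8.0 m

# Depth uncertainty grows with depth squared at fixed spot-localisation
# error, which is why varied-resolution mapping from several perspectives
# is called for in the text above.
```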

  20. Lossless to lossy compression for hyperspectral imagery based on wavelet and integer KLT transforms with 3D binary EZW

    NASA Astrophysics Data System (ADS)

    Cheng, Kai-jen; Dill, Jeffrey

    2013-05-01

    In this paper, a lossless-to-lossy transform-based compression scheme for hyperspectral images, built on the Integer Karhunen-Loève Transform (IKLT) and the Integer Discrete Wavelet Transform (IDWT), is proposed. Integer transforms are used to achieve reversibility. The IKLT is used as a spectral decorrelator and the 2D-IDWT is used as a spatial decorrelator. The three-dimensional Binary Embedded Zerotree Wavelet (3D-BEZW) algorithm efficiently encodes the hyperspectral volumetric image by implementing progressive bitplane coding. The signs and magnitudes of transform coefficients are encoded separately. Lossy and lossless compression of signs is implemented by the conventional EZW algorithm and arithmetic coding, respectively. The efficient 3D-BEZW algorithm is applied to code magnitudes. Further compression can be achieved using arithmetic coding. The lossless and lossy compression performance is compared with other state-of-the-art predictive and transform-based image compression methods on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Results show that the 3D-BEZW performance is comparable to predictive algorithms, while its computational cost is comparable to transform-based algorithms.
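    The reversibility the scheme relies on comes from integer lifting: rounding happens inside the transform but is undone exactly on inversion. A minimal single-level integer Haar-style example of the principle (illustrative; the paper's IDWT and IKLT use more elaborate lifting factorisations):

```python
import numpy as np

def int_haar_fwd(x):
    """One reversible integer Haar-style lifting step: the floor in the
    average is undone exactly on inversion, enabling lossless coding."""
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    d = b - a               # detail coefficients
    s = a + (d >> 1)        # approximation = floor((a + b) / 2)
    return s, d

def int_haar_inv(s, d):
    a = s - (d >> 1)        # the same floored term, subtracted back
    b = d + a
    x = np.empty(2 * s.size, dtype=int)
    x[0::2], x[1::2] = a, b
    return x

x = np.array([7, 3, 128, 129, -5, 40])
s, d = int_haar_fwd(x)
assert np.array_equal(int_haar_inv(s, d), x)  # perfect reconstruction
```

Truncating bitplanes of `s` and `d` gives the lossy operating points, while keeping them all reproduces `x` exactly, which is the lossless-to-lossy property.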

  1. Deep space LADAR, phase 1

    NASA Astrophysics Data System (ADS)

    Frey, Randy W.; Rawlins, Greg; Zepkin, Neil; Bohlin, John

    1989-03-01

    A pseudo-ranging laser radar (PRLADAR) concept is proposed to provide extended range capability to tracking LADAR systems meeting the long-range requirements of SDI mission scenarios such as the SIE midcourse program. The project will investigate the payoff of several transmitter modulation techniques and a feasibility demonstration using a breadboard implementation of a new receiver concept called the Phase Multiplexed Correlator (PMC) will be accomplished. The PRLADAR concept has specific application to spaceborne LADAR tracking missions where increased CNR/SNR performance gained by the proposed technique may reduce the laser power and/or optical aperture requirement for a given mission. The reduction in power/aperture has similar cost reduction advantages in commercial ranging applications. A successful Phase 1 program will lay the groundwork for a quick reaction upgrade to the AMOS/LASE system in support of near term SIE measurement objectives.

  2. 3D visualisation and artistic imagery to enhance interest in `hidden environments' - new approaches to soil science

    NASA Astrophysics Data System (ADS)

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-09-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly, a bespoke 'soil atlas' was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets of artistic illustrations were produced, each set showing the effects of soil organic-matter density and water content on fungal density. The aim was to determine the potential of visualisations and interactivity for stimulating interest in soil and soil illustrations, interest being an important factor in facilitating learning. The illustrations were created using 3D modelling packages, and a wide range of styles was produced. This allowed a preliminary study of the relative merits of different artistic styles, scientific credibility, scale, abstraction and 'realism' (e.g. photo-realism or realism of forms), and of any relationship between these and the level of interest indicated by the study participants in the soil visualisations and VE. The study found significant differences in mean interest ratings for different soil illustration styles, as well as in the perception of the scientific credibility of these styles, although for both measures attitudes toward particular styles varied considerably between participants. There was also found to be a highly significant positive correlation between participants rating styles highly for interest and highly for scientific credibility. There was furthermore a particularly high interest rating among participants for seeing temporal soil processes illustrated/animated, suggesting this as a promising method for further stimulating interest in soil illustrations and soil itself.

  3. Use of stereoscopic satellite imagery for 3D mapping of bedrock structure in West Antarctica: An example from the northern Ford Ranges

    NASA Astrophysics Data System (ADS)

    Contreras, A.; Siddoway, C. S.; Porter, C.; Gottfried, M.

    2012-12-01

In coastal West Antarctica, crustal-scale faults have been minimally mapped using traditional ground-based methods, but regional-scale structures are inferred mainly on the basis of low-resolution potential-field data from airborne geophysical surveys (15 km flightline spacing). We use a new approach to detailed mapping of faults, shear zones, and intrusive relationships using panchromatic and multispectral imagery draped upon a digital elevation model (DEM). Our work focuses on the Fosdick Mountains, a culmination of lower middle crustal rocks exhumed at c. 100 Ma by dextral oblique detachment faulting. Ground truth exists for extensive areas visited during field studies in 2005-2011, providing a basis for spectral analysis of 8-band WorldView-02 imagery for detailed mapping of complex granite-migmatite relationships on the north side of the Fosdick range. A primary aim is the creation of a 3D geological map using the results of spectral analysis merged with a DEM computed from a stereographic pair of high resolution panchromatic images (sequential scenes, acquired 45 seconds apart). DEMs were computed using ERDAS Imagine™ LPS eATE, refined by MATLAB-based interpolation scripts to remove artifacts in the terrain model according to procedures developed by the Polar Geospatial Center (U. Minnesota). Orthorectified satellite imagery that covers the area of the DEMs was subjected to principal component analysis in ESRI ArcGIS™ 10.1, then the different rock types were identified using various combinations of spectral bands in order to map the geology of rock exposures that could not be accessed directly from the ground. Renderings in 3D of the satellite scenes draped upon the DEMs were created using Global Mapper™. The 3D perspective views reveal structural and geological features that are not observed in either the DEM or the satellite imagery alone. The detailed map is crucial for an ongoing petrological / geochemical investigation of Cretaceous crustal

  4. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle-of-arrival information for light rays originating from across the target, allowing the range to a target and a 3D image to be obtained from a single exposure using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images is generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera
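The refocusing in step (1) can be illustrated with the textbook shift-and-add operation on a light field: each sub-aperture view is translated in proportion to a depth parameter and the views are averaged. This is a minimal sketch under assumed integer-pixel disparities; the `refocus` function and its parameters are hypothetical, not the Raytrix processing chain.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing: each sub-aperture view, indexed by its
    lenslet offset (u, v), is shifted in proportion to alpha (which
    selects the synthetic focal depth) and the views are averaged.
    Integer-pixel shifts only, for brevity."""
    acc = np.zeros_like(next(iter(views.values())), dtype=float)
    for (u, v), img in views.items():
        acc += np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                       int(round(alpha * v)), axis=1)
    return acc / len(views)
```

Sweeping `alpha` over a range of values produces the stack of images refocused at different depths; the depth at which a target is sharpest yields its range.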

  5. Preliminary Pseudo 3-D Imagery of the State Line Fault, Stewart Valley, Nevada Using Seismic Reflection Data

    NASA Astrophysics Data System (ADS)

    Saldaña, S. C.; Snelson, C. M.; Taylor, W. J.; Beachly, M.; Cox, C. M.; Davis, R.; Stropky, M.; Phillips, R.; Robins, C.; Cothrun, C.

    2007-12-01

The Pahrump Fault system is located in the central Basin and Range region and consists of three main fault zones: the Nopah range front fault zone, the State Line fault zone and the Spring Mountains range fault zone. The State Line fault zone is made up of northwest-trending dextral strike-slip faults that run parallel to the Nevada-California border. Previous geologic and geophysical studies conducted in and around Stewart Valley, located ~90 km from Las Vegas, Nevada, have constrained the location of the State Line fault zone to within a few kilometers. The goals of this project were to use seismic methods to definitively locate the northwesternmost trace of the State Line fault and to produce pseudo 3-D seismic cross-sections that can then be used to characterize the subsurface geometry and determine the slip of the State Line fault. During July 2007, four seismic lines were acquired in Stewart Valley: two normal and two parallel to the mapped traces of the State Line fault. Presented here are preliminary results from the two seismic lines acquired normal to the fault. These lines were acquired using a 144-channel geode system, with 4.5 Hz vertical geophones set out at 5 m intervals to produce a 595 m long profile to the north and a 715 m long profile to the south. The vibroseis was programmed to produce an 8 s linear sweep from 20-160 Hz. These data returned an excellent signal-to-noise ratio and reveal subsurface lithology that will subsequently be used to resolve the subsurface geometry of the State Line fault. This knowledge will then enhance our understanding of the evolution of the State Line fault, which in turn gives insight into stick-slip fault evolution for the region and may improve understanding of how stress has been partitioned from larger strike-slip systems such as the San Andreas fault.

  6. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this at the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge of how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  7. Detection and delineation of buildings from airborne ladar measurements

    NASA Astrophysics Data System (ADS)

    Swirski, Yoram; Wolowelsky, Karni; Adar, Renen; Figov, Zvi

    2004-11-01

Automatic delineation of buildings is very attractive for both civilian and military applications. Such applications include general mapping, detection of unauthorized construction, change detection, etc. For military applications, high demand exists for accurate building change updates, covering large areas, over short time periods. We present two algorithms coupled together. The height image algorithm is a fast, coarse algorithm operating on large areas. This algorithm is capable of defining blocks of buildings and regions of interest. The point-cloud algorithm is a fine, 3D-based, accurate algorithm for building delineation. Since buildings may be separated by alleys whose width is comparable to or narrower than the LADAR resolution, the height image algorithm marks those crowded buildings as a single object. The point-cloud algorithm separates and accurately delineates individual building boundaries and building sub-sections using roof shape analysis in 3D. Our focus is on the ability to cover large areas with accuracy and high rejection of non-building objects, like trees. We report very good detection performance with only a few misses and false alarms. It is believed that LADAR measurements, coupled with good segmentation algorithms, may replace older systems and methods that require considerable manual work for such applications.
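The coarse height-image stage can be sketched as a rasterization of the ladar point cloud into per-cell height statistics; the function name, cell size, and threshold below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def relative_height_image(points, cell=1.0):
    """Rasterize an (N, 3) ladar point cloud into a grid, keeping the
    per-cell maximum and minimum heights; their difference flags cells
    whose returns stand well above local ground, i.e. candidate
    building (or tree) blocks for the fine point-cloud stage."""
    pts = np.asarray(points, float)
    idx = ((pts[:, :2] - pts[:, :2].min(axis=0)) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    zmax = np.full(shape, -np.inf)
    zmin = np.full(shape, np.inf)
    for (ix, iy), z in zip(idx, pts[:, 2]):
        zmax[ix, iy] = max(zmax[ix, iy], z)
        zmin[ix, iy] = min(zmin[ix, iy], z)
    # empty cells get 0; threshold the result (e.g. > 2.5 m) for candidates
    return np.where(np.isfinite(zmax), zmax - zmin, 0.0)
```

The fine stage would then operate only on the 3D points falling inside the thresholded regions, separating adjacent buildings via roof-shape analysis as the abstract describes.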

  8. Ladar-based IED detection

    NASA Astrophysics Data System (ADS)

Engström, Philip; Larsson, Håkan; Letalick, Dietmar

    2014-05-01

An improvised explosive device (IED) is a bomb constructed and deployed in a non-standard manner. Improvised means that the bomb maker used whatever he could get his hands on, making the device very hard to predict and detect. Nevertheless, the manner in which IEDs are deployed and used, for example as roadside bombs, follows certain patterns. One possible approach for early warning is to record the surroundings when it is safe and use this as reference data for change detection. In this paper a LADAR-based system for IED detection is presented. The idea is to measure the area in front of the vehicle while driving and compare this to the previously recorded reference data. By detecting new, missing or changed objects, the system can make the driver aware of probable threats.
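The change-detection idea can be sketched as a nearest-neighbor comparison between the current scan and the reference cloud: a point with no reference point nearby is a new-object candidate. This is a brute-force sketch with an illustrative threshold; a fielded system would use a spatial index and handle scan-to-reference registration first.

```python
import numpy as np

def detect_changes(reference, current, thresh=0.3):
    """Return points of the current (M, 3) scan farther than `thresh`
    meters from every point of the (N, 3) reference cloud; these are
    candidates for new or changed objects. Brute-force O(M*N)."""
    ref = np.asarray(reference, float)
    cur = np.asarray(current, float)
    # pairwise distances from each current point to each reference point
    d = np.linalg.norm(cur[:, None, :] - ref[None, :, :], axis=2)
    return cur[d.min(axis=1) > thresh]
```

Missing objects would be found symmetrically, by testing reference points against the current scan.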

  9. Phase gradient algorithm method for three-dimensional holographic ladar imaging.

    PubMed

    Stafford, Jason W; Duncan, Bradley D; Rabb, David J

    2016-06-10

    Three-dimensional (3D) holographic ladar uses digital holography with frequency diversity to add the ability to resolve targets in range. A key challenge is that since individual frequency samples are not recorded simultaneously, differential phase aberrations may exist between them, making it difficult to achieve range compression. We describe steps specific to this modality so that phase gradient algorithms (PGA) can be applied to 3D holographic ladar data for phase corrections across multiple temporal frequency samples. Substantial improvement of range compression is demonstrated with a laboratory experiment where our modified PGA technique is applied. Additionally, the PGA estimator is demonstrated to be efficient for this application, and the maximum entropy saturation behavior of the estimator is analytically described. PMID:27409018
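The SAR-style phase gradient autofocus loop that the authors adapt can be sketched in one dimension across the temporal-frequency samples: center the strongest range return, estimate the phase-error gradient from products of adjacent samples, integrate, detrend, and remove. This is the generic textbook recipe under simplifying assumptions (a single dominant return, no windowing step), not the paper's modified 3D technique.

```python
import numpy as np

def pga_range_autofocus(samples, iters=5):
    """1D PGA sketch across frequency samples of a single aperture
    position: iteratively estimate and remove the differential phase
    error that degrades range compression."""
    g = np.asarray(samples, complex).copy()
    n = len(g)
    k = np.arange(n)
    for _ in range(iters):
        profile = np.fft.fft(g)
        # circularly shift the brightest range bin to index 0
        profile = np.roll(profile, -int(np.argmax(np.abs(profile))))
        gc = np.fft.ifft(profile)
        # phase-error gradient from adjacent-sample products
        grad = np.angle(np.conj(gc[:-1]) * gc[1:])
        phi = np.concatenate(([0.0], np.cumsum(grad)))
        phi -= np.polyval(np.polyfit(k, phi, 1), k)  # drop linear trend
        g *= np.exp(-1j * phi)
    return g
```

After correction, the FFT across frequency samples (range compression) recovers a sharp range profile.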

  10. Real-time range generation for ladar hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Coker, Charles F.

    1996-05-01

Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware and is optimized for generating visible or infrared imagery in real time. A by-product of the rendering process, the depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to generating the scenes, calculating the range values, and outputting the range data for a LADAR seeker.
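The depth-to-range conversion mentioned above can be sketched for a standard perspective projection, after the integer depth-buffer values have been normalized to [0, 1]; the near/far plane values and the convention are assumptions, not the facility's actual formula.

```python
import numpy as np

def depth_buffer_to_range(depth, near, far):
    """Convert normalized depth-buffer values in [0, 1] to eye-space
    range by inverting the nonlinear perspective depth mapping.
    `near` and `far` are the projection's clip-plane distances."""
    depth = np.asarray(depth, dtype=np.float64)
    return (near * far) / (far - depth * (far - near))
```

Note the nonlinearity: because perspective depth concentrates precision near the camera, half the depth-buffer range covers only a small slice of the scene nearest the viewer.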

  11. LADAR scene projector for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Cornell, Michael C.; Naumann, Charles B.; Stockbridge, Robert G.; Snyder, Donald R.

    2002-07-01

    Future types of direct detection LADAR seekers will employ focal plane arrays in their receivers. Existing LADAR scene projection technology cannot meet the needs of testing these types of seekers in a Hardware-in-the-Loop environment. It is desired that the simulated LADAR return signals generated by the projection hardware be representative of the complex targets and background of a real LADAR image. A LADAR scene projector has been developed that is capable of meeting these demanding test needs. It can project scenes of simulated 2D LADAR return signals without scanning. In addition, each pixel in the projection can be represented by a 'complex' optical waveform, which can be delivered with sub-nanosecond precision. Finally, the modular nature of the projector allows it to be configured to operate at different wavelengths. This paper describes the LADAR Scene Projector and its full capabilities.

  12. A low-power CMOS trans-impedance amplifier for FM/cw ladar imaging system

    NASA Astrophysics Data System (ADS)

    Hu, Kai; Zhao, Yi-qiang; Sheng, Yun; Zhao, Hong-liang; Yu, Hai-xia

    2013-09-01

A scannerless ladar imaging system based on a unique frequency modulation/continuous wave (FM/cw) technique is able to capture the entire target environment, using a focal plane array to construct a 3D picture of the target. This paper presents a low-power trans-impedance amplifier (TIA) designed and implemented in 0.18 μm CMOS technology, which is used in the FM/cw imaging ladar with a 64×64 metal-semiconductor-metal (MSM) self-mixing detector array. The input stage of the operational amplifier (op amp) in the TIA is realized with a folded cascode structure to achieve large open-loop gain and low offset. The simulation and test results of the TIA with MSM detectors indicate that the single-ended trans-impedance gain exceeds 100 kΩ, and the -3 dB bandwidth of the op amp exceeds 60 MHz. The input common-mode voltage ranges from 0.2 V to 1.5 V, and the power dissipation is reduced to 1.8 mW with a supply voltage of 3.3 V. The performance test results show that the TIA is a candidate for the preamplifier of the read-out integrated circuit (ROIC) in the FM/cw scannerless ladar imaging system.
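As a back-of-the-envelope check on such a design, the usual TIA rule of thumb relates the closed-loop -3 dB bandwidth to the op amp's gain-bandwidth product, the feedback resistor, and the total input capacitance. The capacitance figure below is an assumed detector-plus-parasitic value, not from the paper.

```python
import math

def tia_bandwidth(gbw_hz, rf_ohm, cin_farad):
    """Rule-of-thumb closed-loop -3 dB bandwidth of a transimpedance
    amplifier: f3dB ~ sqrt(GBW / (2*pi*Rf*Cin))."""
    return math.sqrt(gbw_hz / (2.0 * math.pi * rf_ohm * cin_farad))

# Assumed 1 pF input capacitance with the paper's 60 MHz / 100 kOhm figures
f3db = tia_bandwidth(gbw_hz=60e6, rf_ohm=100e3, cin_farad=1e-12)
```

With these assumed numbers the estimate lands in the low-megahertz range, illustrating the gain-bandwidth tradeoff a ROIC preamplifier designer must balance against pixel rate.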

  13. Multi-dimensional, non-contact metrology using trilateration and high resolution FMCW ladar.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

Here we propose, describe, and provide experimental proof-of-concept demonstrations of a multidimensional, non-contact length metrology system design based on high resolution (millimeter to sub-100 micron) frequency modulated continuous wave (FMCW) ladar and trilateration based on length measurements from multiple, optical fiber-connected transmitters. With an accurate FMCW ladar source, the trilateration-based design provides 3D resolution inherently independent of standoff range and allows self-calibration for flexible setup of a field system. A proof-of-concept experimental demonstration was performed using a highly stabilized, 2 THz bandwidth chirped laser source, two emitters, and one scanning emitter/receiver providing 1D surface profiles (2D metrology) of diffuse targets. The measured coordinate precision of <200 microns was determined to be limited by laser speckle caused by diffuse scattering from the targets. PMID:26193132
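The trilateration step can be sketched as a linear least-squares solve over sphere equations: subtracting the first sphere equation from the rest removes the quadratic unknown. This shows the geometric principle only, with hypothetical emitter positions; the paper's fiber-connected system and self-calibration are more involved.

```python
import numpy as np

def trilaterate(emitters, ranges):
    """Recover a 3D point from range measurements to known emitter
    positions. |x - P_i|^2 = r_i^2 linearizes to
    2*(P_i - P_0).x = r_0^2 - r_i^2 + |P_i|^2 - |P_0|^2."""
    P = np.asarray(emitters, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (P[1:] - P[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(P[1:] ** 2, axis=1) - np.sum(P[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

At least four non-coplanar emitters are needed for a full 3D solution; more emitters overdetermine the system and average down range noise.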

  14. Implementing torsional-mode Doppler ladar.

    PubMed

    Fluckiger, David U

    2002-08-20

    Laguerre-Gaussian laser modes carry orbital angular momentum as a consequence of their helical-phase front screw dislocation. This torsional beam structure interacts with rotating targets, changing the orbital angular momentum (azimuthal Doppler) of the scattered beam because angular momentum is a conserved quantity. I show how to measure this change independently from the usual longitudinal momentum (normal Doppler shift) and derive the apropos coherent mixing efficiencies for monostatic, truncated Laguerre and Gaussian-mode ladar antenna patterns. PMID:12206220

  15. New High-Resolution 3D Imagery of Fault Deformation and Segmentation of the San Onofre and San Mateo Trends in the Inner California Borderlands

    NASA Astrophysics Data System (ADS)

    Holmes, J. J.; Driscoll, N. W.; Kent, G. M.; Bormann, J. M.; Harding, A. J.

    2015-12-01

    The Inner California Borderlands (ICB) is situated off the coast of southern California and northern Baja. The structural and geomorphic characteristics of the area record a middle Oligocene transition from subduction to microplate capture along the California coast. Marine stratigraphic evidence shows large-scale extension and rotation overprinted by modern strike-slip deformation. Geodetic and geologic observations indicate that approximately 6-8 mm/yr of Pacific-North American relative plate motion is accommodated by offshore strike-slip faulting in the ICB. The farthest inshore fault system, the Newport-Inglewood Rose Canyon (NIRC) fault complex is a dextral strike-slip system that extends primarily offshore approximately 120 km from San Diego to the San Joaquin Hills near Newport Beach, California. Based on trenching and well data, the NIRC fault system Holocene slip rate is 1.5-2.0 mm/yr to the south and 0.5-1.0 mm/yr along its northern extent. An earthquake rupturing the entire length of the system could produce an Mw 7.0 earthquake or larger. West of the main segments of the NIRC fault complex are the San Mateo and San Onofre fault trends along the continental slope. Previous work concluded that these were part of a strike-slip system that eventually merged with the NIRC complex. Others have interpreted these trends as deformation associated with the Oceanside Blind Thrust fault purported to underlie most of the region. In late 2013, we acquired the first high-resolution 3D P-Cable seismic surveys (3.125 m bin resolution) of the San Mateo and San Onofre trends as part of the Southern California Regional Fault Mapping project aboard the R/V New Horizon. Analysis of these volumes provides important new insights and constraints on the fault segmentation and transfer of deformation. Based on the new 3D sparker seismic data, our preferred interpretation for the San Mateo and San Onofre fault trends is they are transpressional features associated with westward

  16. Ladar-based terrain cover classification

    NASA Astrophysics Data System (ADS)

    Macedo, Jose; Manduchi, Roberto; Matthies, Larry H.

    2001-09-01

    An autonomous vehicle driving in a densely vegetated environment needs to be able to discriminate between obstacles (such as rocks) and penetrable vegetation (such as tall grass). We propose a technique for terrain cover classification based on the statistical analysis of the range data produced by a single-axis laser rangefinder (ladar). We first present theoretical models for the range distribution in the presence of homogeneously distributed grass and of obstacles partially occluded by grass. We then validate our results with real-world cases, and propose a simple algorithm to robustly discriminate between vegetation and obstacles based on the local statistical analysis of the range data.
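The local statistical discrimination can be sketched as a windowed spread test on raw ranges: returns penetrating grass scatter over many depths, while a solid obstacle clusters them tightly. The window size and threshold below are illustrative, not the paper's fitted theoretical models.

```python
import numpy as np

def classify_terrain(ranges, window=15, std_thresh=0.15):
    """Label consecutive windows of single-axis ladar returns as
    'grass' (broad range spread) or 'obstacle' (tight cluster),
    by thresholding the local standard deviation in meters."""
    ranges = np.asarray(ranges, float)
    labels = []
    for i in range(0, len(ranges) - window + 1, window):
        w = ranges[i:i + window]
        labels.append("grass" if w.std() > std_thresh else "obstacle")
    return labels
```

The paper's contribution is the statistical modeling behind such a threshold, including the case of an obstacle partially occluded by grass, which a fixed cutoff like this one would handle poorly.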

  17. Foliage discrimination using a rotating ladar

    NASA Technical Reports Server (NTRS)

    Castano, A.; Matthies, L.

    2003-01-01

We present a real-time algorithm that detects foliage using range data from a rotating laser. Objects not classified as foliage are conservatively labeled as non-drivable obstacles. In contrast to related work that uses range statistics to classify objects, we exploit the expected localities and continuities of an obstacle, in both space and time. Also, instead of attempting to find a single accurate discriminating factor for every ladar return, we hypothesize the class of a few returns and then spread the confidence (and classification) to other returns using the locality constraints. The Urbie robot is presently using this algorithm to discriminate drivable grass from obstacles during outdoor autonomous navigation tasks.

  18. AMCOM RDEC ladar HWIL simulation system development

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Mobley, Scottie B.; Buford, James A., Jr.

    2003-09-01

Hardware-in-the-loop (HWIL) testing has, for many years, been an integral part of the modeling and simulation efforts at the U.S. Army Aviation and Missile Command's (AMCOM) Aviation and Missile Research, Engineering, and Development Center (AMRDEC). AMCOM's history includes the development, characterization, and implementation of several unique technologies for the creation of synthetic environments in the visible, infrared, and radio frequency spectral regions and AMCOM has continued significant efforts in these areas. This paper describes recent advancements at AMCOM's Advanced Simulation Center (ASC) and concentrates on Ladar HWIL simulation system development.

  19. New High-Resolution 3D Seismic Imagery of Deformation and Fault Architecture Along Newport-Inglewood/Rose Canyon Fault in the Inner California Borderlands

    NASA Astrophysics Data System (ADS)

    Holmes, J. J.; Bormann, J. M.; Driscoll, N. W.; Kent, G.; Harding, A. J.; Wesnousky, S. G.

    2014-12-01

    The tectonic deformation and geomorphology of the Inner California Borderlands (ICB) records the transition from a convergent plate margin to a predominantly dextral strike-slip system. Geodetic measurements of plate boundary deformation onshore indicate that approximately 15%, or 6-8 mm/yr, of the total Pacific-North American relative plate motion is accommodated by faults offshore. The largest near-shore fault system, the Newport-Inglewood/Rose Canyon (NI/RC) fault complex, has a Holocene slip rate estimate of 1.5-2.0 mm/yr, according to onshore trenching, and current models suggest the potential to produce an Mw 7.0+ earthquake. The fault zone extends approximately 120 km, initiating from the south near downtown San Diego and striking northwards with a constraining bend north of Mt. Soledad in La Jolla and continuing northwestward along the continental shelf, eventually stepping onshore at Newport Beach, California. In late 2013, we completed the first high-resolution 3D seismic survey (3.125 m bins) of the NI/RC fault offshore of San Onofre as part of the Southern California Regional Fault Mapping project. We present new constraints on fault geometry and segmentation of the fault system that may play a role in limiting the extent of future earthquake ruptures. In addition, slip rate estimates using piercing points such as offset channels will be explored. These new observations will allow us to investigate recent deformation and strain transfer along the NI/RC fault system.

  20. Measurement of liquid level using ladar

    NASA Astrophysics Data System (ADS)

    Qi, Bing; Peng, Wei; Lin, Junxiu; Ding, Jianhua

    1996-09-01

In this paper, a new method of liquid level measurement using a discrete-frequency IMCW ladar and optical fiber is described. The distance measurement is made using the absolute technique of ladar, in which the phase of an amplitude-modulated light wave reflected from the liquid surface is compared with that of the original modulation signal. To compensate for phase drift due to changes in the delay time of the electric circuit, a symmetric optical fiber is used as a reference path. The signal path and the reference path are measured in turn, and the difference between the two paths is proportional to the distance from the sensor head to the surface of the liquid. The optical unit is installed at a fixed reference point above the surface of the liquid and is connected to the electronic unit by optical fibers. The main attribute of this system is that it neither requires electrical supplies nor produces electrical signals in situ; this intrinsic safety allows its use in the oil industry. According to the test results, the accuracy of this system is better than 0.2%.
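The phase-to-distance relation behind this technique follows from the round-trip delay of the amplitude modulation; a minimal sketch (the factor 4π reflects the doubled path, and the measurement is unambiguous only within half a modulation wavelength, which is why such systems use multiple discrete frequencies):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(delta_phi, f_mod):
    """Distance from the phase shift of an amplitude-modulated beam:
    delta_phi = 2*pi*f_mod*(2*d/C)  =>  d = C*delta_phi/(4*pi*f_mod)."""
    return C * delta_phi / (4.0 * math.pi * f_mod)
```

For a 10 MHz modulation the unambiguous interval is C/(2·f) ≈ 15 m; a coarser second frequency resolves which interval the surface lies in.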

  1. LADAR vision technology for automated rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Frey, Randy W.

    1991-01-01

    LADAR Vision Technology at Autonomous Technologies Corporation consists of two sensor/processing technology elements: high performance long range multifunction coherent Doppler laser radar (LADAR) technology; and short range integrated CCD camera with direct detection laser ranging sensors. Algorithms and specific signal processing implementations have been simulated for both sensor/processing approaches to position and attitude tracking applicable to AR&C. Experimental data supporting certain sensor measurement accuracies have been generated.

  2. Multiple-input multiple-output 3D imaging laser radar

    NASA Astrophysics Data System (ADS)

    Liu, Chunbo; Wu, Chao; Han, Xiang'e.

    2015-10-01

A 3D (angle-angle-range) imaging laser radar (LADAR) based on a multiple-input multiple-output structure is proposed. In the LADAR, multiple coherent beams are randomly phased to form a structured light field, and an APD array detector is utilized to receive the echoes from the target. The sampled signals from each element of the APD array are correlated with the reference light to reconstruct local 3D images of the target. The 3D panorama of the target can then be obtained by stitching together the local images from all the elements. The system composition is described first, then the operation principle is presented, and numerical simulations are provided to show the validity of the proposed scheme.

  3. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for the metrological parameters that identify a device's capability to capture a real scene. For this reason several national and international organizations have, over the last ten years, been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, which also covers laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper reviews the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between the barycenters of rigidly connected spheres, frequency-domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both triangulation and direct range detection.
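The certified-artifact comparison can be illustrated with a sphere: fit a sphere to the measured points, then compare the recovered center, radius, and residuals against the certified values. Below is a minimal linear least-squares fit, a generic sketch rather than the procedure of any cited protocol.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: |p|^2 = 2 c.p + (R^2 - |c|^2)
    is linear in the center c and the combined term, so an (N, 3)
    point set yields c and R directly from one lstsq solve."""
    P = np.asarray(points, float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

Residuals of the measured points against the fitted (or certified) sphere then characterize the device's form error, while repeated fits at different standoffs probe range-dependent accuracy.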

  4. AMRDEC's HWIL synthetic environment development efforts for LADAR sensors

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.

    2004-08-01

Hardware-in-the-loop (HWIL) testing has been an integral part of the modeling and simulation efforts at the U.S. Army Aviation and Missile Research, Engineering, and Development Center (AMRDEC). AMRDEC's history includes the development and implementation of several unique technologies for producing synthetic environments in the visible, infrared, MMW and RF regions. With emerging sensor/electronics technology, LADAR sensors are becoming a more viable option as an integral part of weapon systems, and AMRDEC has been expending effort to develop capabilities for testing LADAR sensors in a HWIL environment. LADAR HWIL testing poses several challenges, since its electronics and computation requirements combine the demands of both passive-image and active-sensor HWIL testing. Advancements have been made in several key areas to address the challenges of developing a synthetic environment for LADAR sensor testing. In this paper, we present the latest results from the LADAR projector development and test efforts at AMRDEC's Advanced Simulation Center (ASC).

  5. Monostatic all-fiber scanning LADAR system.

    PubMed

    Leach, Jeffrey H; Chinn, Stephen R; Goldberg, Lew

    2015-11-20

    A compact scanning LADAR system based on a fiber-coupled, monostatic configuration which transmits (TX) and receives (RX) through the same aperture has been developed. A small piezo-electric stripe actuator was used to resonantly vibrate a fiber cantilever tip and scan the transmitted near-single-mode optical beam and the cladding mode receiving aperture. When compared to conventional bi-static systems with polygon, galvo, or Risley-prism beam scanners, the described system offers several advantages: the inherent alignment of the receiver field-of-view (FOV) relative to the TX beam angle, small size and weight, and power efficiency. Optical alignment of the system was maintained at all ranges since there is no parallax between the TX beam and the receiver FOV. A position-sensing detector (PSD) was used to sense the instantaneous fiber tip position. The Si PSD operated in a two-photon absorption mode to detect the transmitted 1.5 μm pulses. The prototype system collected 50,000 points per second with a 6° full scan angle and a 27 mm clear aperture/40 mm focal length TX/RX lens, had a range precision of 4.7 mm, and was operated at a maximum range of 26 m. PMID:26836533

  6. DVE flight test results of a sensor enhanced 3D conformal pilot support system

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Völschow, Philipp; Singer, Bernhard; Strobel, Michael; Kramper, Patrick

    2015-06-01

The paper presents results and findings of flight tests of the Airbus Defence and Space DVE system SFERION performed at Yuma Proving Ground. During the flight tests, ladar information was fused in real time with a priori database knowledge, and 3D conformal symbology was generated for display on an HMD. The test flights included low-level flights as well as numerous brownout landings.

  7. Remote sensing solution using 3-D flash LADAR for automated control of aircraft

    NASA Astrophysics Data System (ADS)

    Neff, Brian J.; Fuka, Jennifer A.; Burwell, Alan C.; Gray, Stephen W.; Hubbard, Mason J.; Schenkel, Joseph W.

    2015-09-01

    The majority of image quality studies in the field of remote sensing have been performed on systems with conventional aperture functions. These systems have well-understood image quality tradeoffs, characterized by the General Image Quality Equation (GIQE). Advanced, next-generation imaging systems present challenges to both post-processing and image quality prediction. Examples include sparse apertures, synthetic apertures, coded apertures and phase elements. As a result of the non-conventional point spread functions of these systems, post-processing becomes a critical step in the imaging process and artifacts arise that are more complicated than simple edge overshoot. Previous research at the Rochester Institute of Technology's Digital Imaging and Remote Sensing Laboratory has resulted in a modeling methodology for sparse and segmented aperture systems, the validation of which will be the focus of this work. This methodology has predicted some unique post-processing artifacts that arise when a sparse aperture system with wavefront error is used over a large (panchromatic) spectral bandpass. Since these artifacts are unique to sparse aperture systems, they have not yet been observed in any real-world data. In this work, a laboratory setup and initial results for a model validation study will be described. Initial results will focus on the validation of spatial frequency response predictions and verification of post-processing artifacts. The goal of this study is to validate the artifact and spatial frequency response predictions of this model. This will allow model predictions to be used in image quality studies, such as aperture design optimization, and the signal-to-noise vs. post-processing artifact tradeoff resulting from choosing a panchromatic vs. multispectral system.

  8. ALLFlight: detection of moving objects in IR and ladar images

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven

    2013-05-01

Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar and laser radar) are mounted onto DLR's research helicopter FHS (flying helicopter simulator) to gather complementary sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information into one single comprehensive description of the outside situation. While both TV and IR cameras deliver images with frame rates of 25 Hz or 30 Hz, ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, in combination with algorithms that fuse the extracted features with ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and ladar data.

  9. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

Based on the characteristics of coherent-ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are identified by block matching and grouped. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimate of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real range image of coherent ladar with 8 gray levels are denoised by this algorithm, and the results are compared with those of the median filter, multi-template order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-anomaly noise and Gaussian noise in coherent-ladar range images are effectively suppressed by NLPS.
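
The mean-versus-mode distinction is the heart of the method. A rough sketch of the idea follows, assuming a simple SSD block-matching rule (patch size, search radius and similarity threshold are illustrative, not the authors' values):

```python
import numpy as np

def nlps_denoise(img, patch=3, search=7, tau=20.0):
    """Hypothetical sketch of the NLPS idea: gather center pixels of blocks
    similar to the current block, then take the gray value of maximum
    probability (the histogram mode) instead of the NLM weighted mean.
    img: 2-D array; tau: per-pixel SSD threshold for block matching."""
    img = np.asarray(img, dtype=float)
    pad = patch // 2
    padded = np.pad(img, pad + search, mode='reflect')
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + search, j + pad + search
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            samples = []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    blk = padded[ci + di - pad:ci + di + pad + 1,
                                 cj + dj - pad:cj + dj + pad + 1]
                    if np.mean((blk - ref) ** 2) <= tau:   # similar block
                        samples.append(padded[ci + di, cj + dj])
            vals, counts = np.unique(np.round(samples), return_counts=True)
            out[i, j] = vals[np.argmax(counts)]            # mode, not mean
    return out
```

Using the mode makes isolated range-anomaly values unlikely to win the vote, which matches the abstract's claim that range abnormality noise is suppressed.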

  10. Anomaly detection in clutter using spectrally enhanced LADAR

    NASA Astrophysics Data System (ADS)

    Chhabra, Puneet S.; Wallace, Andrew M.; Hopgood, James R.

    2015-05-01

Discrete return (DR) Laser Detection and Ranging (Ladar) systems provide a series of echoes that reflect from objects in a scene. These can be first, last or multi-echo returns. In contrast, Full-Waveform (FW) Ladar systems measure the intensity of light reflected from objects continuously over a period of time. In a camouflaged scenario, e.g., objects hidden behind dense foliage, an FW-Ladar penetrates such foliage and returns a sequence of echoes including buried faint echoes. The aim of this paper is to learn local patterns of co-occurring echoes characterised by their measured spectra. A deviation from such patterns defines an abnormal event in a forest/tree depth profile. As far as the authors know, neither DR- nor FW-Ladar, combined with several spectral measurements, has been applied to anomaly detection. This work presents an algorithm that allows detection of spectral and temporal anomalies in FW Multi-Spectral Ladar (FW-MSL) data samples. An anomaly is defined as a full-waveform temporal and spectral signature that does not conform to a prior expectation, represented using a learnt subspace (dictionary) and a set of coefficients that capture co-occurring local patterns using an overlapping temporal window. A modified optimization scheme is proposed for subspace learning based on stochastic approximations. The objective function is augmented with a discriminative term that represents the subspace's separability properties and supports anomaly characterisation. The algorithm detects several man-made objects and anomalous spectra hidden in dense vegetation clutter and also allows tree species classification.
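
The subspace-plus-residual idea can be illustrated with a plain PCA subspace standing in for the learnt dictionary (the paper's stochastic-approximation solver and discriminative term are not reproduced here; all waveform shapes below are synthetic):

```python
import numpy as np

def fit_subspace(normal_waveforms, k=5):
    """Learn a k-dimensional subspace (a simple stand-in for the paper's
    learnt dictionary) from waveforms assumed to be normal clutter."""
    X = np.asarray(normal_waveforms, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                      # orthonormal principal directions

def anomaly_score(waveform, mean, basis):
    """Reconstruction residual of a waveform against the subspace; a large
    residual flags a temporal/spectral anomaly."""
    r = np.asarray(waveform, dtype=float) - mean
    proj = basis.T @ (basis @ r)
    return float(np.linalg.norm(r - proj))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
normal = [np.exp(-((t - d) / 0.05) ** 2) + 0.01 * rng.standard_normal(64)
          for d in rng.uniform(0.2, 0.8, 200)]          # foliage-like echoes
mean, basis = fit_subspace(normal, k=10)

clutter = np.exp(-((t - 0.5) / 0.05) ** 2)              # echo like the training set
manmade = np.sin(40 * t) * np.exp(-((t - 0.5) / 0.02) ** 2)  # unusual signature
# The man-made signature typically scores much higher than the clutter echo.
print(anomaly_score(clutter, mean, basis), anomaly_score(manmade, mean, basis))
```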

  11. Interactive photogrammetric system for mapping 3D objects

    NASA Astrophysics Data System (ADS)

    Knopp, Dave E.

    1990-08-01

    A new system, FOTO-G, has been developed for 3D photogrammetric applications. It is a production-oriented software system designed to work with highly unconventional photogrammetric image configurations which result when photographing 3D objects. A demonstration with imagery from an actual 3D-mapping project is reported.

  12. A Λ-type soft-aperture LADAR SNR improvement with quantum-enhanced receiver

    NASA Astrophysics Data System (ADS)

    Yang, Song; Ruan, Ningjuan; Lin, Xuling; Wu, Zhiqiang

    2015-08-01

A quantum-enhanced receiver that uses squeezed vacuum injection (SVI) and phase-sensitive amplification (PSA) is in principle capable of obtaining an effective signal-to-noise ratio (SNR) improvement in a soft-aperture homodyne-detection LAser Detection And Ranging (LADAR) system over the classical homodyne LADAR when imaging a far-away target. Here we investigate the performance of the quantum-enhanced receiver in a Λ-type soft-aperture LADAR for target imaging. We also use a fast Fourier transform (FFT) algorithm to simulate the LADAR intensity image, and compare the SNR improvement in the soft-aperture and hard-aperture cases.

  13. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The view currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  14. Optical design of a synthetic aperture ladar antenna system

    NASA Astrophysics Data System (ADS)

    Cao, Changqing; Zeng, Xiaodong; Zhao, Xiaoyan; Liu, Huanhuan; Man, Xiangkun

    2008-03-01

The spatial resolution of a conventional imaging LADAR system is constrained by the diffraction limit of the telescope aperture. The purpose of this work is to investigate Synthetic Aperture Imaging LADAR (SAIL), which employs aperture synthesis with coherent laser radar to overcome the diffraction limit and achieve fine-resolution, long-range, two-dimensional imaging with modest aperture diameters. The key techniques required for Synthetic Aperture LADAR (SAL) are analyzed briefly. The preliminary design of the optical antenna is also introduced in this paper. We investigate the design method and relevant problems of an efficient optical antenna required in SAL. The design follows the same approach used at microwave frequencies, based on numerical analysis and the error tolerances achievable with present manufacturing technology. To meet the SAL requirements of small size, low mass, low cost and high image quality, the design is analyzed and optimized with ZEMAX.
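
The motivation for aperture synthesis is captured by the standard resolution scaling: a real aperture of diameter D is diffraction-limited to roughly λR/D in cross-range, while a synthetic aperture of length L achieves roughly λR/(2L). A quick illustration with assumed, representative numbers:

```python
# Cross-range resolution comparison (all parameter values are illustrative).
wavelength = 1.55e-6       # m, a typical coherent-ladar wavelength (assumed)
R = 10e3                   # m, range to target
D = 0.10                   # m, real telescope aperture diameter
L = 1.0                    # m, synthesized aperture length

real_res = wavelength * R / D          # diffraction-limited real aperture
sail_res = wavelength * R / (2 * L)    # synthetic aperture
print(real_res, sail_res)              # 0.155 m vs. 0.00775 m
```

A 10 cm telescope at 10 km resolves ~15 cm at best, while synthesizing a 1 m aperture along the platform track brings this below 1 cm, which is the "fine resolution with modest aperture diameters" the abstract refers to.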

  15. Ladar System Identifies Obstacles Partly Hidden by Grass

    NASA Technical Reports Server (NTRS)

    Castano, Andres

    2003-01-01

A ladar-based system now undergoing development is intended to enable an autonomous mobile robot in an outdoor environment to avoid moving toward trees, large rocks, and other obstacles that are partly hidden by tall grass. The design of the system incorporates the assumption that the robot is capable of moving through grass and provides for discrimination between grass and obstacles on the basis of geometric properties extracted from ladar readings as described below. The system (see figure) includes a ladar system that projects a range-measuring pulsed laser beam that has a small angular width of β radians and is capable of measuring distances of reflective objects from a minimum of dmin to a maximum of dmax. The system is equipped with a rotating mirror that scans the beam through a relatively wide angular range of Θ radians in a horizontal plane at a suitable small height above the ground. Successive scans are performed at time intervals of T seconds. During each scan, the laser beam is fired at relatively small angular intervals of Δθ radians to make range measurements, so that the total number of range measurements acquired in a scan is Ne = Θ/Δθ.

  16. Target recognition for ladar range image using slice image

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Wang, Liang

    2015-12-01

A shape descriptor and a complete shape-based recognition system using slice images as the geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by a three-dimensional Hough transform and a corresponding mathematical transformation. The system consists of two processes: model library construction and recognition. In the model library construction process, a series of range images is obtained by sampling the model object at preset attitude angles. Then, all the range images are converted into slice images. The number of slice images is reduced by cluster analysis and selection of representatives, which reduces the size of the model library. In the recognition process, the slice image of the scene is compared with the slice images in the model library, and recognition is decided by this comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and the slice-image representation method is compared with the moment-invariants representation method. The experimental results show that, both without noise and with ladar noise, the system has a high recognition rate and a low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.

  17. A 3-D Look at Post-Tropical Cyclone Hermine

    NASA Video Gallery

    This 3-D flyby animation of GPM imagery shows Post-Tropical Storm Hermine on Sept. 6. Rain was falling at a rate of over 1.1 inches (27 mm) per hour between the Atlantic coast and Hermine's center ...

  18. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  19. Sensor based 3D conformal cueing for safe and reliable HC operation specifically for landing in DVE

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Kress, Martin; Klasen, Stephanus

    2013-05-01

The paper describes the approach of a sensor-based landing aid for helicopters in degraded visual conditions. The system concept presented employs a long-range, high-resolution ladar sensor for identifying obstacles in the flight and approach paths, as well as for measuring landing site conditions such as slope, roughness and precise position relative to the helicopter during the final approach. All these measurements are visualized to the pilot. Cueing is done by 3D conformal symbology displayed in a head-tracked HMD, enhanced by 2D symbols for data that is perceived more easily through 2D symbols than through 3D cueing. All 3D conformal symbology is placed on the measured landing site surface, which is further visualized by a grid structure displaying landing site slope, roughness and small obstacles. Due to the limited resolution of the employed HMD, a specific scheme of blending in the information during the approach is employed. The interplay between the in-flight and in-approach obstacle warning and CFIT warning symbology and this landing aid symbology is also investigated and evaluated for the NH90 helicopter, which already implements obstacle warning and CFIT symbology based on a long-range, high-resolution ladar sensor. The paper further describes the results of simulator and flight tests performed with this system employing a ladar sensor and a head-tracked head-mounted display system. In the simulator trials, a full model of the ladar sensor producing 3D measurement points was used, working with the same algorithms used in the flight tests.

  20. Ladar scene projector for a hardware-in-the-loop simulation system.

    PubMed

    Xu, Rui; Wang, Xin; Tian, Yi; Li, Zhuo

    2016-07-20

    In order to test a direct-detection ladar in a hardware-in-the-loop simulation system, a ladar scene projector is proposed. A model based on the ladar range equation is developed to calculate the profile of the ladar return signal. The influences of both the atmosphere and the target's surface properties are considered. The insertion delays of different channels of the ladar scene projector are investigated and compensated for. A target range image with 108 pixels is generated. The simulation range is from 0 to 15 km, the range resolution is 1.04 m, the range error is 1.28 cm, and the peak-valley error for different channels is 15 cm. PMID:27463932

  1. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, including the ability to monitor test flight vehicles in real time, and add improvements such as high-resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  2. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation built on the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading through the scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else must be considered to communicate in 3D? How do we handle the invisible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subject? For whom?

  3. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  4. Status report on next-generation LADAR for driving unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Juberts, Maris; Barbera, Anthony J.

    2004-12-01

The U.S. Department of Defense has initiated plans for the deployment of autonomous robotic vehicles in various tactical military operations starting in about seven years. Most of these missions will require the vehicles to drive autonomously over open terrain and on roads which may contain traffic, obstacles, military personnel as well as pedestrians. Unmanned Ground Vehicles (UGVs) must therefore be able to detect, recognize and track objects and terrain features in very cluttered environments. Although several LADAR sensors exist today which have successfully been implemented and demonstrated to provide somewhat reliable obstacle detection and can be used for path planning and selection, they tend to be limited in performance, are affected by obscurants, and are quite large and expensive. In addition, even though considerable effort and funding have been provided by the DOD R&D community, nearly all of the development has been for target detection (ATR) and tracking from various flying platforms. Participation in the Army and DARPA sponsored UGV programs has helped NIST to identify requirement specifications for LADAR to be used for on- and off-road autonomous driving. This paper describes the expected requirements for a next-generation LADAR for driving UGVs and presents an overview of proposed LADAR design concepts and a status report on current developments in scannerless Focal Plane Array (FPA) LADAR and advanced scanning LADAR which may be able to achieve the stated requirements. Examples of real-time range images taken with existing LADAR prototypes will be presented.

  5. Self-mixing detector candidates for an FM/cw ladar architecture

    NASA Astrophysics Data System (ADS)

    Ruff, William C.; Bruno, John D.; Kennerly, Stephen W.; Ritter, Ken; Shen, Paul H.; Stann, Barry L.; Stead, Michael R.; Sztankay, Zoltan G.; Tobin, Mary S.

    2000-09-01

    The U.S. Army Research Laboratory (ARL) is currently investigating unique self-mixing detectors for ladar systems. These detectors have the ability to internally detect and down-convert light signals that are amplitude modulated at ultra-high frequencies (UHF). ARL is also investigating a ladar architecture based on FM/cw radar principles, whereby the range information is contained in the low-frequency mixing product derived by mixing a reference UHF chirp with a detected, time-delayed UHF chirp. When inserted into the ARL FM/cw ladar architecture, the self-mixing detector eliminates the need for wide band transimpedance amplifiers in the ladar receiver because the UHF mixing is done internal to the detector, thereby reducing both the cost and complexity of the system and enhancing its range capability. This fits well with ARL's goal of developing low-cost, high-speed line array ladars for submunition applications and extremely low-cost, single pixel ladars for ranging applications. Several candidate detectors have been investigated for this application, with metal-semiconductor-metal (MSM) detectors showing the most promise. This paper discusses the requirements for a self-mixing detector, characterization measurements from several candidate detectors and experimental results from their insertion in a laboratory FM/cw ladar.
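
The ranging principle that the self-mixing detector implements internally can be sketched in a few lines: mixing a reference chirp with a time-delayed copy produces a low-frequency beat whose frequency is proportional to range. The chirp parameters below are illustrative, not ARL's:

```python
import numpy as np

fs, T, B, c = 2e6, 1e-3, 200e3, 3e8      # sample rate, sweep time, bandwidth
R_true = 1500.0                          # m, target range (illustrative)
tau = 2 * R_true / c                     # round-trip delay

t = np.arange(0, T, 1 / fs)
def chirp_phase(x):
    return 2 * np.pi * (B / (2 * T)) * x ** 2    # linear-FM phase, carrier dropped

ref = np.cos(chirp_phase(t))                     # reference chirp
echo = np.cos(chirp_phase(t - tau))              # time-delayed detected chirp
beat = ref * echo                                # the detector's mixing product

spec = np.abs(np.fft.rfft(beat * np.hanning(beat.size)))
spec[0] = 0.0                                    # discard the DC term
f_beat = np.argmax(spec) * fs / beat.size        # low-frequency beat, f = (B/T)*tau
R_est = c * f_beat * T / (2 * B)                 # range from beat frequency
print(R_est)
```

Because the beat lands at audio-like frequencies while the chirps themselves are at UHF, doing this mixing inside the detector removes the need for wideband transimpedance amplifiers, as the abstract notes.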

  6. Range resolution improvement of eyesafe ladar testbed (ELT) measurements using sparse signal deconvolution

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Gunther, Jacob H.

    2014-06-01

    The Eyesafe Ladar Test-bed (ELT) is an experimental ladar system with the capability of digitizing return laser pulse waveforms at 2 GHz. These waveforms can then be exploited off-line in the laboratory to develop signal processing techniques for noise reduction, range resolution improvement, and range discrimination between two surfaces of similar range interrogated by a single laser pulse. This paper presents the results of experiments with new deconvolution algorithms with the hoped-for gains of improving the range discrimination of the ladar system. The sparsity of ladar returns is exploited to solve the deconvolution problem in two steps. The first step is to estimate a point target response using a database of measured calibration data. This basic target response is used to construct a dictionary of target responses with different delays/ranges. Using this dictionary ladar returns from a wide variety of surface configurations can be synthesized by taking linear combinations. A sparse linear combination matches the physical reality that ladar returns consist of the overlapping of only a few pulses. The dictionary construction process is a pre-processing step that is performed only once. The deconvolution step is performed by minimizing the error between the measured ladar return and the dictionary model while constraining the coefficient vector to be sparse. Other constraints such as the non-negativity of the coefficients are also applied. The results of the proposed technique are presented in the paper and are shown to compare favorably with previously investigated deconvolution techniques.
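
The dictionary-plus-sparsity idea can be sketched as follows, with a plain non-negative ISTA solver standing in for the paper's constrained minimization (pulse shape, noise level and regularization weight are illustrative):

```python
import numpy as np

n = 200
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)                 # measured point response
D = np.column_stack([np.roll(pulse, s) for s in range(n)])   # dictionary of delays

# Two overlapping surfaces only 8 samples apart -- inside the pulse width.
x_true = np.zeros(n)
x_true[60], x_true[68] = 1.0, 0.6
y = D @ x_true + 0.01 * np.random.default_rng(1).standard_normal(n)

lam = 0.05                                   # sparsity weight (illustrative)
step = 1.0 / np.linalg.norm(D, 2) ** 2       # step size for guaranteed descent
x = np.zeros(n)
for _ in range(3000):                        # non-negative ISTA iterations
    x = np.maximum(0.0, x - step * (D.T @ (D @ x - y)) - step * lam)

print(int(np.argmax(x)))                     # dominant recovered return
```

The sparse, non-negative coefficient vector separates the two returns that a matched filter would merge into one broad lobe, which is the range-discrimination gain the paper targets.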

  7. Use of 3D laser radar for navigation of unmanned aerial and ground vehicles in urban and indoor environments

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Venable, Don; Smearcheck, Mark

    2007-04-01

This paper discusses the integration of inertial measurements with measurements from a three-dimensional (3D) imaging sensor for position and attitude determination of unmanned aerial vehicles (UAV) and autonomous ground vehicles (AGV) in urban or indoor environments. To enable operation of UAVs and AGVs at any time in any environment, a Precision Navigation, Attitude, and Time (PNAT) capability is required that is robust and not solely dependent on the Global Positioning System (GPS). In urban and indoor environments a GPS position capability may be unavailable not only due to shadowing, significant signal attenuation or multipath, but also due to intentional denial or deception. Although deep integration of GPS and Inertial Measurement Unit (IMU) data may prove to be a viable solution, an alternative method is discussed in this paper. The alternative solution is based on 3D imaging sensor technologies such as Flash Ladar (Laser Radar). Flash Ladar technology consists of a modulated laser emitter coupled with a focal plane array detector and the required optics. Like a conventional camera, this sensor creates an "image" of the environment, but instead of producing a 2D image where each pixel has an associated intensity value, the flash Ladar generates an image where each pixel has an associated range and intensity value. Integration of flash Ladar with the attitude from the IMU allows creation of a 3-D scene. Current low-cost Flash Ladar technology is capable of greater than 100 x 100 pixel resolution with 5 mm depth resolution at a 30 Hz frame rate. The proposed algorithm first converts the 3D imaging sensor measurements to a 3D point cloud; next, significant environmental features such as planar features (walls), line features or point features (corners) are extracted and associated from one 3D imaging sensor frame to the next. Finally, characteristics of these features, such as the normal or direction vectors, are used to compute the platform position and attitude.
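
The front end of such a pipeline, back-projecting a flash-ladar range image into a point cloud and recovering a planar feature's normal, can be sketched as follows (the pinhole projection model and all parameters are assumptions for illustration):

```python
import numpy as np

def range_image_to_points(rng_img, fov_deg=45.0):
    """Back-project a per-pixel range image through an assumed pinhole model
    into a 3-D point cloud (one point per pixel)."""
    h, w = rng_img.shape
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)          # focal length, pixels
    u, v = np.meshgrid(np.arange(w) - w / 2 + 0.5, np.arange(h) - h / 2 + 0.5)
    dirs = np.stack([u, v, np.full_like(u, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)   # unit view rays
    return dirs * rng_img[..., None]                       # range along each ray

def plane_normal(points):
    """Least-squares normal of a planar patch: direction of least variance."""
    pts = points.reshape(-1, 3)
    centered = pts - pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[-1]

# Synthetic wall at z = 10 m seen by a 100x100-pixel sensor.
f = (100 / 2) / np.tan(np.radians(22.5))
u, v = np.meshgrid(np.arange(100) - 49.5, np.arange(100) - 49.5)
rng_img = 10.0 * np.sqrt(u**2 + v**2 + f**2) / f   # ranges so that z == 10 m
pts = range_image_to_points(rng_img)
n = plane_normal(pts)
print(n)   # ~ +/- [0, 0, 1]
```

Tracking how such normals rotate between frames is the kind of feature-direction constraint the abstract uses to solve for platform attitude.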

  8. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  9. Ground vehicle based ladar for standoff detection of road-side hazards

    NASA Astrophysics Data System (ADS)

    Hollinger, Jim; Close, Ryan

    2015-05-01

In recent years, the number of commercially available LADAR (also referred to as LIDAR) systems has grown with the increased interest in ground vehicle robotics and aided navigation/collision avoidance in various industries. With this increased demand, the cost of these systems has dropped and their capabilities have increased. As a result of this trend, LADAR systems are becoming a cost-effective sensor to use in a number of applications of interest to the US Army. One such application is the standoff detection of road-side hazards from ground vehicles. This paper will discuss detection of road-side hazards partially concealed by light to medium vegetation. Current algorithms using commercially available LADAR systems for detecting these targets will be presented, along with results from relevant data sets. Additionally, optimization of commercial LADAR sensors and/or fusion with radar will be discussed as ways of increasing detection ability.

  10. Comb-calibrated frequency-modulated continuous-wave ladar for absolute distance measurements.

    PubMed

    Baumann, Esther; Giorgetta, Fabrizio R; Coddington, Ian; Sinclair, Laura C; Knabe, Kevin; Swann, William C; Newbury, Nathan R

    2013-06-15

    We demonstrate a comb-calibrated frequency-modulated continuous-wave laser detection and ranging (FMCW ladar) system for absolute distance measurements. The FMCW ladar uses a compact external cavity laser that is swept quasi-sinusoidally over 1 THz at a 1 kHz rate. The system simultaneously records the heterodyne FMCW ladar signal and the instantaneous laser frequency at sweep rates up to 3400 THz/s, as measured against a free-running frequency comb (femtosecond fiber laser). Demodulation of the ladar signal against the instantaneous laser frequency yields the range to the target with 1 ms update rates, bandwidth-limited 130 μm resolution and a ~100 nm accuracy that is directly linked to the counted repetition rate of the comb. The precision is <100 nm at the 1 ms update rate and reaches ~6 nm for a 100 ms average. PMID:23938965
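
The demodulation step can be illustrated in a few lines: since the heterodyne beat phase is 2πν(t)τ, a linear fit of unwrapped beat phase against the (comb-measured) instantaneous laser frequency ν(t) yields the round-trip delay τ, and hence absolute range, even for a quasi-sinusoidal sweep. A synthetic sketch with an idealized complex (I/Q) beat and a noiseless, known ν(t):

```python
import numpy as np

c = 3e8
R_true = 1.2345                  # m, target range (illustrative)
tau = 2 * R_true / c             # round-trip delay

t = np.linspace(0, 1e-3, 200001)                       # one 1 kHz sweep period
nu = 0.5e12 * (1 - np.cos(2 * np.pi * 1e3 * t))        # ~1 THz quasi-sinusoidal sweep
beat = np.exp(1j * 2 * np.pi * nu * tau)               # idealized I/Q beat signal

phase = np.unwrap(np.angle(beat))                      # beat phase = 2*pi*nu(t)*tau
slope = (np.sum((nu - nu.mean()) * (phase - phase.mean()))
         / np.sum((nu - nu.mean()) ** 2))              # least-squares fit vs. nu
R_est = c * (slope / (2 * np.pi)) / 2
print(R_est)
```

Demodulating against ν(t) rather than against time is what makes the nonuniform sweep rate harmless, which is the point of calibrating the sweep against the comb.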

  11. Context-driven automated target detection in 3D data

    NASA Astrophysics Data System (ADS)

    West, Karen F.; Webb, Brian N.; Lersch, James R.; Pothier, Steven; Triscari, Joseph M.; Iverson, A. E.

    2004-09-01

    This paper summarizes a system, and its component algorithms, for context-driven target vehicle detection in 3-D data that was developed under the Defense Advanced Research Projects Agency (DARPA) Exploitation of 3-D Data (E3D) Program. In order to determine the power of shape and geometry for the extraction of context objects and the detection of targets, our algorithm research and development concentrated on the geometric aspects of the problem and did not utilize intensity information. Processing begins with extraction of context information and initial target detection at reduced resolution, followed by a detailed, full-resolution analysis of candidate targets. Our reduced-resolution processing includes a probabilistic procedure for finding the ground that is effective even in rough terrain; a hierarchical, graph-based approach for the extraction of context objects and potential vehicle hide sites; and a target detection process that is driven by context-object and hide-site locations. Full-resolution processing includes statistical false alarm reduction and decoy mitigation. When results are available from previously collected data, we also perform object-level change detection, which affects the probabilities that objects are context objects or targets. Results are presented for both synthetic and collected LADAR data.

  12. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  13. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  14. High-resolution 3D imaging laser radar flight test experiments

    NASA Astrophysics Data System (ADS)

    Marino, Richard M.; Davis, W. R.; Rich, G. C.; McLaughlin, J. L.; Lee, E. I.; Stanley, B. M.; Burnside, J. W.; Rowe, G. S.; Hatch, R. E.; Square, T. E.; Skelly, L. J.; O'Brien, M.; Vasile, A.; Heinrichs, R. M.

    2005-05-01

    Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage nets and/or tree canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photodiode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched, solid-state, frequency-doubled Nd:YAG laser transmitting short laser pulses (300 ps FWHM) at a 16 kHz pulse rate and a 532 nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32x32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10^6. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal-plane array at each pixel. Using the detector in this binary response mode
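The per-pixel 500 MHz counter maps a clock count to range through the round-trip travel time of light. A minimal sketch of that conversion — the constant and function names are ours; only the 500 MHz clock rate comes from the abstract:

```python
C = 299_792_458.0  # speed of light, m/s
F_CLK = 500e6      # per-pixel time-of-flight counter clock, Hz (from the abstract)

def counts_to_range(n_counts):
    """Convert a Geiger-mode pixel's stop-clock count to one-way range.
    The factor of 2 accounts for the round trip of the laser pulse."""
    return C * n_counts / (2.0 * F_CLK)

range_bin = counts_to_range(1)          # one clock tick of range quantization
print(round(range_bin, 3))              # 0.3  (about 30 cm per count)
print(round(counts_to_range(3336), 1))  # 1000.1  (~1 km stand-off)
```

The ~30 cm single-count quantization is why such systems accumulate many photon-counting frames and interpolate to reach finer effective range resolution.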

  15. High accuracy LADAR scene projector calibration sensor development

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.; Bowden, Mark H.

    2008-04-01

    A sensor system for the characterization of infrared laser radar scene projectors has been developed. Available sensor systems do not provide sufficient range resolution to evaluate the high-precision LADAR projector systems developed by the U.S. Army Research, Development and Engineering Command (RDECOM) Aviation and Missile Research, Development and Engineering Center (AMRDEC). With timing precision to a fraction of a nanosecond, it can confirm the accuracy of simulated return pulses from a nominal range of up to 6.5 km to a resolution of 4 cm. Increased range can be achieved through firmware reconfiguration. Two independent amplitude triggers measure both rise and fall times, providing a judgment of pulse shape and allowing estimation of the contained energy. Each return channel can measure up to 32 returns per trigger, characterizing each return pulse independently. Current efforts include extending the capability to 8 channels. This paper outlines the development, testing, capabilities and limitations of this new sensor system.
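The quoted figures can be sanity-checked from the round-trip travel time of light: a 4 cm range resolution corresponds to roughly 267 ps of timing precision (consistent with "a fraction of a nanosecond"), and a 6.5 km simulated return to about 43 µs of delay. A small sketch; the helper names are ours:

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_delay(range_m):
    """Two-way travel time for a simulated return at the given range."""
    return 2.0 * range_m / C

def range_resolution(timing_s):
    """Range uncertainty implied by a given timing precision."""
    return C * timing_s / 2.0

print(round(round_trip_delay(6500) * 1e6, 2))   # 43.36  (µs for a 6.5 km return)
print(round(round_trip_delay(0.04) * 1e12, 1))  # 266.9  (ps per 4 cm resolution bin)
```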

  16. Asymptotic modeling of synthetic aperture ladar sensor phenomenology

    NASA Astrophysics Data System (ADS)

    Neuroth, Robert M.; Rigling, Brian D.; Zelnio, Edmund G.; Watson, Edward A.; Velten, Vincent J.; Rovito, Todd V.

    2015-05-01

    Interest in the use of active electro-optical (EO) sensors for non-cooperative target identification has steadily increased as the quality and availability of EO sources and detectors have improved. A unique and recent innovation has been the development of an airborne synthetic aperture imaging capability at optical wavelengths. To effectively exploit this new data source for target identification, one must develop an understanding of target-sensor phenomenology at those wavelengths. Current high-frequency, asymptotic EM predictors are computationally intractable for such conditions, as their ray density is inversely proportional to wavelength. As a more efficient alternative, we have developed a geometric-optics-based simulation for synthetic aperture ladar that seeks to model the second-order statistics of the diffuse scattering commonly found at those wavelengths, but with a much lower ray density. Code has been developed, ported to high-performance computing environments, and tested on a variety of target models.

  17. Time reversed photonic beamforming of arbitrary waveform ladar arrays

    NASA Astrophysics Data System (ADS)

    Cox, Joseph L.; Zmuda, Henry; Bussjaeger, Rebecca J.; Erdmann, Reinhard K.; Fanto, Michael L.; Hayduk, Michael J.; Malowicki, John E.

    2007-04-01

    Herein is described a novel approach to adaptive photonic beamforming of an array of optical fibers for the express purpose of laser ranging. The beamforming technique leverages the concept of time reversal, previously implemented in the sonar community and recently described in a photonic implementation for beamforming of ultra-wideband radar arrays. Photonic beamforming is also capable of combining the optical output of several fiber lasers into a coherent source, exactly phase-matched on a pre-determined target. By electro-optically modulating pulses derived from frequency-chirped femtosecond-scale laser pulses, ladar waveforms can be generated with arbitrary spectral and temporal characteristics within the limitations of the wide-band system. Also described is a means of generating angle/angle/range measurements of illuminated targets.

  18. Concepts using optical MEMS array for ladar scene projection

    NASA Astrophysics Data System (ADS)

    Smith, J. Lynn

    2003-09-01

    Scene projection for HITL testing of LADAR seekers is unique because the third dimension is time delay. Advancement in AFRL of electronic delay and pulse-shaping circuits, VCSEL emitters, fiber optics and associated scene generation is underway, and technology hand-off to test facilities is expected eventually. However, the currently projected size and cost call for mitigation through further innovation in system design, incorporating new developments, cooperation, and leveraging of dual-purpose technology. Therefore a concept is offered which greatly reduces the number (and thus cost) of pulse-shaping circuits and enables the projector to be installed on the mobile arm of a flight motion simulator table without fiber optic cables. The concept calls for an optical MEMS (micro-electromechanical system) steerable micro-mirror array. IFOVs are a cluster of four micro-mirrors, each of which steers through a unique angle to a selected light source with the appropriate delay and waveform basis. An array of such sources promotes angle-to-delay mapping. Separate pulse waveform basis circuits for each scene IFOV are not required because a single set of basis functions is broadcast to all MEMS elements simultaneously. Waveform delivery to spatial filtering and collimation optics is addressed by angular selection at the MEMS array. Emphasis is on technology in existence or under development by the government, its contractors and the telecommunications industry. Values for components are first assumed to be those that are easily available. Concept adequacy and upgrades are then discussed. In conclusion, an opto-mechanical scan option ranks as the best light source for near-term MEMS-based projector testing of both flash and scan LADAR seekers.

  19. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  20. Amazing Space: Explanations, Investigations, & 3D Visualizations

    NASA Astrophysics Data System (ADS)

    Summers, Frank

    2011-05-01

    The Amazing Space website is STScI's online resource for communicating Hubble discoveries and other astronomical wonders to students and teachers everywhere. Our team has developed a broad suite of materials, readings, activities, and visuals that are not only engaging and exciting, but also standards-based and fully supported so that they can be easily used within state and national curricula. These products include stunning imagery, grade-level readings, trading card games, online interactives, and scientific visualizations. We are currently exploring the potential use of stereo 3D in astronomy education.

  1. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  2. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  3. Multi Sensor Data Integration for AN Accurate 3d Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, a 3D model automatically generated from aerial imagery typically lacks accuracy for roads under bridges, details under tree canopy, isolated trees, etc. In many cases it also suffers from undulating road surfaces, non-conforming building shapes, and the loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, and details under bridges, but are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture that were missing in the original 3D model derived from aerial imagery could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.
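The coverage-based idea — wherever the detailed mobile-mapping cloud has points, prefer them over nearby coarser aerial points — can be sketched crudely. This is a hypothetical toy using brute-force distances, not the authors' pipeline:

```python
import numpy as np

def integrate_clouds(aerial, mobile, radius=0.5):
    """Merge two point clouds: aerial points within `radius` of any
    mobile-mapping point are dropped in favour of the (more detailed)
    mobile data; aerial points with no mobile coverage are kept."""
    # pairwise squared distances, shape (n_aerial, n_mobile)
    d2 = ((aerial[:, None, :] - mobile[None, :, :]) ** 2).sum(-1)
    keep = d2.min(axis=1) > radius ** 2  # aerial points with no mobile coverage
    return np.vstack([aerial[keep], mobile])

aerial = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0], [10.0, 10.0, 0.0]])
mobile = np.array([[0.1, 0.1, 0.0], [0.2, 0.0, 0.0]])
merged = integrate_clouds(aerial, mobile)
print(len(merged))  # 4: two uncovered aerial points + two mobile points
```

A production version would use a spatial index (k-d tree or voxel hash) rather than the O(n·m) distance matrix shown here.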

  4. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID:20816308

  5. Ladar scene generation techniques for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Coker, Jason S.; Coker, Charles F.; Bergin, Thomas P.

    1999-07-01

    LADAR (Laser Detection and Ranging), as its name implies, uses laser-ranging technology to provide information regarding target and/or background signatures. When fielded in systems, LADAR can provide ranging information to onboard algorithms that in turn may utilize the information to analyze target type and range. Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop (HWIL) facility can provide a nondestructive testing environment to evaluate a system's capability and therefore reduce program risk and cost. However, in LADAR systems many factors can influence the quality of the data obtained, and thus have a significant impact on algorithm performance. It is therefore important to take these factors into consideration when attempting to simulate LADAR data for digital or HWIL testing. Some of the factors considered in this paper include weak or noisy detectors, multiple returns, and weapon body dynamics. Various computer techniques that may be employed to simulate these factors are analyzed to determine their merit for use in real-time simulations.
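Two of the factors named above — detector range noise and missed returns from weak detections — can be injected into an ideal range image in a few lines. The parameter values and the NaN encoding of dropouts are our own illustrative choices, not the paper's:

```python
import numpy as np

def simulate_ladar_returns(true_range, dropout_p=0.05, sigma=0.15, seed=7):
    """Degrade an ideal range image the way a noisy seeker might see it:
    Gaussian range noise on every pixel, plus random dropouts where a
    weak return is missed entirely (encoded here as NaN)."""
    rng = np.random.default_rng(seed)
    noisy = true_range + rng.normal(0.0, sigma, true_range.shape)
    noisy[rng.random(true_range.shape) < dropout_p] = np.nan
    return noisy

scene = np.full((32, 32), 150.0)   # ideal flat target at 150 m
obs = simulate_ladar_returns(scene)
print(obs.shape, int(np.isnan(obs).sum()) > 0)
```

Multi-return and body-dynamics effects would require per-pixel return lists and a time-varying sensor pose, which this single-frame sketch omits.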

  6. Low-cost compact MEMS scanning ladar system for robotic applications

    NASA Astrophysics Data System (ADS)

    Moss, Robert; Yuan, Ping; Bai, Xiaogang; Quesada, Emilio; Sudharsanan, Rengarajan; Stann, Barry L.; Dammann, John F.; Giza, Mark M.; Lawler, William B.

    2012-06-01

    Future robots and autonomous vehicles require compact low-cost Laser Detection and Ranging (LADAR) systems for autonomous navigation. The Army Research Laboratory (ARL) recently demonstrated a brassboard short-range eye-safe MEMS scanning LADAR system for robotic applications. Boeing Spectrolab is performing a technology transfer (CRADA) of this system and has built a compact MEMS scanning LADAR system with additional improvements in receiver sensitivity, the laser system, and the data processing system. Improved system sensitivity, low cost, miniaturization, and low power consumption are the main goals for the commercialization of this LADAR system. The receiver sensitivity has been improved by 2x using large-area InGaAs PIN detectors with low-noise amplifiers. The FPGA code has been updated to extend the range to 50 meters and detect up to 3 targets per pixel. Range accuracy has been improved through the implementation of an optical T-Zero input line. A compact, commercially available erbium fiber laser operating at a 1550 nm wavelength is used as the transmitter, reducing the size of the LADAR system considerably from the ARL brassboard system. The computer interface has been consolidated to allow image data and configuration data (configuration settings and system status) to pass through a single Ethernet port. In this presentation we discuss the system architecture and future improvements to receiver sensitivity using avalanche photodiodes.

  7. Imagery Integration Team

    NASA Technical Reports Server (NTRS)

    Calhoun, Tracy; Melendrez, Dave

    2014-01-01

    -of-a-kind imagery assets and skill sets, such as ground-based fixed and tracking cameras, crew-in the-loop imaging applications, and the integration of custom or commercial-off-the-shelf sensors onboard spacecraft. For spaceflight applications, the Integration 2 Team leverages modeling, analytical, and scientific resources along with decades of experience and lessons learned to assist the customer in optimizing engineering imagery acquisition and management schemes for any phase of flight - launch, ascent, on-orbit, descent, and landing. The Integration 2 Team guides the customer in using NASA's world-class imagery analysis teams, which specialize in overcoming inherent challenges associated with spaceflight imagery sets. Precision motion tracking, two-dimensional (2D) and three-dimensional (3D) photogrammetry, image stabilization, 3D modeling of imagery data, lighting assessment, and vehicle fiducial marking assessments are available. During a mission or test, the Integration 2 Team provides oversight of imagery operations to verify fulfillment of imagery requirements. The team oversees the collection, screening, and analysis of imagery to build a set of imagery findings. It integrates and corroborates the imagery findings with other mission data sets, generating executive summaries to support time-critical mission decisions.

  8. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.

  9. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  10. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  11. Ground target detection based on discrete cosine transform and Rényi entropy for imaging ladar

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Chen, Weili; Li, Junwei; Dong, Yanbing

    2016-01-01

    The discrete cosine transform (DCT), owing to its ability to represent images jointly in the spatial and spatial-frequency domains, has been applied in sequence data analysis and image fusion. For ladar intensity and range images, we study the statistical properties of the Rényi entropy of the DCT computed over a one-dimensional window, and analyze how those statistics change when man-made objects appear in the scene. On this foundation, a novel method for generating a saliency map based on the DCT and Rényi entropy is proposed. Ground target detection is then completed by segmenting the saliency map with a simple and convenient thresholding method. For ladar intensity and range images, experimental results show that the proposed method can effectively detect military vehicles against complex ground backgrounds with a low false alarm rate.
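The underlying intuition — structured man-made patches concentrate DCT energy in a few coefficients and therefore score a low Rényi entropy, while natural clutter spreads energy and scores high — can be sketched in one dimension. This is an illustrative reconstruction under that assumption, not the authors' exact algorithm:

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy of order alpha for a discrete distribution p."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def saliency_1d(signal, win=8, alpha=2.0):
    """Slide a 1-D window, take its DCT, and score each position by the
    Rényi entropy of the normalized magnitude spectrum: man-made-like
    structure -> concentrated spectrum -> low entropy."""
    D = dct2_matrix(win)
    scores = []
    for i in range(len(signal) - win + 1):
        c = np.abs(D @ signal[i:i + win])
        p = c / c.sum() if c.sum() > 0 else np.full(win, 1.0 / win)
        scores.append(renyi_entropy(p, alpha))
    return np.array(scores)

rng = np.random.default_rng(1)
row = rng.normal(0, 1, 64)   # clutter: broadband, high entropy
row[24:40] = 5.0             # flat "man-made" patch: energy in the DC bin only
s = saliency_1d(row)
print(bool(s[28] < s[4]))    # True: the patch scores lower entropy than clutter
```

Thresholding the (inverted) entropy scores would give the saliency segmentation the abstract describes; the paper applies this jointly to intensity and range channels.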

  12. Thermal infrared exploitation for 3D face reconstruction

    NASA Astrophysics Data System (ADS)

    Abayowa, Bernard O.

    2009-05-01

    Despite the advances in face recognition research, current face recognition systems are still not accurate or robust enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a 3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture is learned using kernel canonical correlation analysis (KCCA), and a 3D modeler is then used to estimate the geometric structure from the predicted visual imagery. This research will find its application in uncontrolled environments where illumination- and pose-invariant identification or tracking is required at long range, such as urban search and rescue (Amber alerts, missing dementia patients) and manhunt scenarios.

  13. Imaging signal-to-noise ratio of synthetic aperture ladar

    NASA Astrophysics Data System (ADS)

    Liu, Liren

    2015-09-01

    On the basis of the Poisson photocurrent statistics of photon-limited heterodyne detection, in this paper the signal-to-noise ratios at the receiver in the time domain and on the focused 1-D and 2-D images in the space domain are derived for both down-looking and side-looking synthetic aperture imaging ladars (SAILs) using PIN or APD photodiodes. The major shot noises in the down-looking SAIL and the side-looking SAIL come, respectively, from the dark current of the photodiode and the local beam current. It is found that the ratio of the 1-D image SNR to the receiver SNR is proportional to the number of resolution elements in the direction across the travel, and the ratio of the 2-D image SNR to the 1-D image SNR is proportional to the number of resolution elements in the travel direction. The sensitivity, the effect of the Fourier transform of the sampled signal, and the influence of the time response of the detection circuit are also discussed. The study will help in correctly designing a SAIL system.

  14. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  15. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  16. The Enhanced-Mode Ladar Wind Sensor and Its Application in Planetary Wind Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Soreide, D. C.; Mcgann, R. L.; Erwin, L. L.; Morris, D. J.

    1993-01-01

    For several years we have been developing an optical air-speed sensor that has a clear application as a meteorological wind-speed sensor for the Mars landers. This sensor has been developed for aircraft use to replace the familiar, pressure-based Pitot probe. Our approach utilizes a new concept in the laser-based optical measurement of air velocity (the Enhanced-Mode Ladar), which allows us to make velocity measurements with significantly lower laser power than conventional methods. The application of the Enhanced-Mode Ladar to measuring wind speeds in the martian atmosphere is discussed.

  17. Development of an automultiscopic true 3D display (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Kurtz, Russell M.; Pradhan, Ranjit D.; Aye, Tin M.; Yu, Kevin H.; Okorogu, Albert O.; Chua, Kang-Bin; Tun, Nay; Win, Tin; Schindler, Axel

    2005-05-01

    True 3D displays, whether generated by volume holography, merged stereopsis (requiring glasses), or autostereoscopic methods (stereopsis without the need for special glasses), are useful in a great number of applications, ranging from training through product visualization to computer gaming. Holography provides an excellent 3D image but cannot yet be produced in real time, merged stereopsis results in accommodation-convergence conflict (where distance cues generated by the 3D appearance of the image conflict with those obtained from the angular position of the eyes) and lacks parallax cues, and autostereoscopy produces a 3D image visible only from a small region of space. Physical Optics Corporation is developing the next step in real-time 3D displays, the automultiscopic system, which eliminates accommodation-convergence conflict, produces 3D imagery from any position around the display, and includes true image parallax. Theory of automultiscopic display systems is presented, together with results from our prototype display, which produces 3D video imagery with full parallax cues from any viewing direction.

  18. Visualization of 3D Geological Models on Google Earth

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Um, J.; Park, M.

    2013-05-01

    Google Earth combines satellite imagery, aerial photography, thematic maps and various data sets to make a three-dimensional (3D) interactive image of the world. Currently, Google Earth is a popular visualization tool in a variety of fields and plays an increasingly important role not only for private users in daily life, but also for scientists, practitioners, policymakers and stakeholders in research and application. In this study, a method to visualize 3D geological models on Google Earth is presented. COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) was used to represent different 3D geological models such as boreholes, fence sections, surface-based 3D volumes and 3D grids by triangle meshes (sets of triangles connected by their common edges or corners). In addition, we designed Keyhole Markup Language (KML, the XML-based scripting language of Google Earth) code to import the COLLADA files into the 3D render window of Google Earth. The method was applied to the Grosmont formation in Alberta, Canada. The application showed that the combination of COLLADA and KML enables Google Earth to effectively visualize 3D geological structures and properties. [figure removed; caption: Visualization of the (a) boreholes, (b) fence sections, (c) 3D volume model and (d) 3D grid model of the Grosmont formation on Google Earth]
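Placing a COLLADA model at a geographic position requires only a short KML fragment with a `<Model>` element referencing the `.dae` file. A minimal generator sketch; the file name and coordinates below are placeholders, not values from the study:

```python
def kml_model_placemark(name, dae_href, lon, lat, alt=0.0):
    """Return a minimal KML document that tells Google Earth to render
    a COLLADA (.dae) model at the given longitude/latitude/altitude."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Model>
      <altitudeMode>absolute</altitudeMode>
      <Location>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <altitude>{alt}</altitude>
      </Location>
      <Link><href>{dae_href}</href></Link>
    </Model>
  </Placemark>
</kml>"""

# hypothetical borehole model somewhere in northern Alberta
doc = kml_model_placemark("borehole_model", "borehole.dae", -112.8, 56.9, 300.0)
print("<Model>" in doc)  # True
```

Saving this next to the `.dae` file (or zipping both into a `.kmz`) and opening it in Google Earth renders the mesh at the stated location.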

  19. Simulation of 3D infrared scenes using random fields model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhang, Jianqi

    2001-09-01

    Analysis and simulation of smart munitions require imagery for the munition's sensor to view. Traditional infrared background simulations have been limited to planar scenes. A new method is described for synthesizing images in 3D with varied terrain textures. We develop random-field and temperature-field models to simulate 3D infrared scenes. The generalized long-correlation (GLC) model, one of the random-field models, generates both the 3D terrain skeleton data and the terrain texture in this work. To build the terrain mesh from the random fields, digital elevation models (DEM) are introduced in the paper, and texture mapping pastes the texture onto the uneven surfaces of the 3D scene. Simulation with the random-fields model is an effective way to produce 3D infrared scenes with high randomness and realism.
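
    As a loose illustration of terrain synthesis from correlated randomness (a 1-D stand-in, not the paper's 2-D GLC model), the sketch below builds a fractal elevation profile by midpoint displacement:

```python
import random

def midpoint_displacement(n_levels, roughness=0.5, seed=0):
    """Generate a 1-D fractal elevation profile by recursively inserting
    randomly displaced midpoints, halving the perturbation amplitude at
    each finer scale."""
    rng = random.Random(seed)
    profile = [0.0, 0.0]          # fixed endpoints
    amplitude = 1.0
    for _ in range(n_levels):
        refined = []
        for a, b in zip(profile, profile[1:]):
            refined.append(a)
            refined.append((a + b) / 2.0 + rng.uniform(-amplitude, amplitude))
        refined.append(profile[-1])
        profile = refined
        amplitude *= roughness    # smaller bumps at finer scales
    return profile

heights = midpoint_displacement(6)   # 2**6 + 1 = 65 samples
```

    Each refinement level doubles the number of segments, so the profile gains detail at progressively smaller scales, mimicking natural terrain roughness.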

  20. A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission

    NASA Technical Reports Server (NTRS)

    Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.

    2011-01-01

    This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.
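
    A hedged sketch of the donor-column idea behind such cloud construction: each off-nadir imager pixel is assigned the profile of the nadir pixel whose radiances it most resembles. The similarity metric and names here are illustrative, not the paper's exact algorithm.

```python
def construct_3d_cloud(off_nadir_radiances, nadir_radiances, nadir_profiles):
    """Assign to each off-nadir pixel the cloud profile of the nadir
    column ("donor") whose radiances match it most closely, by minimum
    squared radiance difference."""
    field = []
    for rad in off_nadir_radiances:
        best = min(range(len(nadir_radiances)),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(rad, nadir_radiances[j])))
        field.append(nadir_profiles[best])
    return field

# Toy example: one off-nadir pixel, two candidate nadir columns.
profiles = construct_3d_cloud(
    off_nadir_radiances=[(0.9, 0.2)],
    nadir_radiances=[(0.1, 0.1), (0.85, 0.25)],
    nadir_profiles=["clear", "cloud"])
```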

  1. Jigsaw phase III: a miniaturized airborne 3-D imaging laser radar with photon-counting sensitivity for foliage penetration

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, Mohan; Blask, Steven; Higgins, Thomas; Clifton, William; Davidsohn, Daniel; Carson, Ryan; Reynolds, Van; Pfannenstiel, Joanne; Cannata, Richard; Marino, Richard; Drover, John; Hatch, Robert; Schue, David; Freehart, Robert; Rowe, Greg; Mooney, James; Hart, Carl; Stanley, Byron; McLaughlin, Joseph; Lee, Eui-In; Berenholtz, Jack; Aull, Brian; Zayhowski, John; Vasile, Alex; Ramaswami, Prem; Ingersoll, Kevin; Amoruso, Thomas; Khan, Imran; Davis, William; Heinrichs, Richard

    2007-04-01

    Jigsaw three-dimensional (3D) imaging laser radar is a compact, lightweight system for imaging highly obscured targets through dense foliage semi-autonomously from an unmanned aircraft. The Jigsaw system uses a gimbaled sensor operating in a spotlight mode to laser-illuminate a cued target, and to autonomously capture and produce the 3D image of hidden targets under trees at high 3D voxel resolution. With our MIT Lincoln Laboratory team members, the sensor system has been integrated into a geo-referenced 12-inch gimbal, and used in airborne data collections from a UH-1 manned helicopter, which served as a surrogate platform for the purpose of data collection and system validation. In this paper, we discuss the results from the ground integration and testing of the system, and the results from UH-1 flight data collections. We also discuss the performance results of the system obtained using ladar calibration targets.

  2. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  3. Noise filtering techniques for photon-counting ladar data

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Wharton, Michael E., III; Stout, Kevin D.; Neuenschwander, Amy L.

    2012-06-01

    Many of the recent small, low-power ladar systems provide detection sensitivities at the photon level for altimetry applications. These "photon-counting" instruments are often the operational solution for high-altitude or space-based platforms, where low signal strength and size limitations must be accommodated. Despite the many existing algorithms for lidar data product generation, there remains a void in techniques for handling the increased noise level in photon-counting measurements, since the larger analog systems do not exhibit such low SNR. Solar background noise poses a significant challenge to accurately extracting surface features from the data. Thus, filtering is required prior to implementation of other post-processing efforts. This paper presents several methodologies for noise filtering photon-counting data. Techniques include modified Canny edge detection, PDF-based signal extraction, and localized statistical analysis. The Canny edge detection identifies features in a rasterized data product using a Gaussian filter and gradient calculation to extract signal photons. PDF-based analysis matches local probability density functions with the aggregate, thereby extracting probable signal points. The localized statistical method assigns thresholding values based on a weighted local mean of angular variances. These approaches have demonstrated the ability to remove noise and subsequently provide accurate surface (ground/canopy) determination. The results presented here are based on analysis of multiple data sets acquired with NASA's high-altitude MABEL system and on photon-counting data supplied by Sigma Space Inc., configured to simulate the expected data product of the instrument for NASA's upcoming ICESat-2 mission.
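
    The general idea behind such filtering can be sketched as a simple density filter: bin photons by elevation and keep only bins whose occupancy is anomalously high relative to the background. The binning scheme and threshold below are illustrative, not the paper's exact method.

```python
import math

def filter_photons(elevations, bin_size=5.0, k=2.0):
    """Keep photons in elevation bins whose counts exceed the mean bin
    count by k standard deviations; sparse bins are treated as solar
    background noise and discarded."""
    lo = min(elevations)
    idx = lambda z: int((z - lo) // bin_size)
    counts = {}
    for z in elevations:
        counts[idx(z)] = counts.get(idx(z), 0) + 1
    vals = list(counts.values())
    mean = sum(vals) / len(vals)
    std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) or 1.0
    signal_bins = {b for b, c in counts.items() if c > mean + k * std}
    return [z for z in elevations if idx(z) in signal_bins]

# Sparse background photons plus a dense cluster near a 50 m surface.
noise = [float(z) for z in range(0, 100, 10)]
signal = [50.0 + 0.1 * i for i in range(30)]
kept = filter_photons(noise + signal)
```

    Real algorithms must also handle sloped surfaces and varying background rates, which is why the paper's methods operate on localized statistics rather than a single global threshold.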

  4. Integration and demonstration of MEMS-scanned LADAR for robotic navigation

    NASA Astrophysics Data System (ADS)

    Stann, Barry L.; Dammann, John F.; Del Giorno, Mark; DiBerardino, Charles; Giza, Mark M.; Powers, Michael A.; Uzunovic, Nenad

    2014-06-01

    LADAR is among the pre-eminent sensor modalities for autonomous vehicle navigation. Size, weight, power and cost constraints impose significant practical limitations on perception systems intended for small ground robots. In recent years, the Army Research Laboratory (ARL) developed a LADAR architecture based on a MEMS mirror scanner that fundamentally improves the trade-offs between these limitations and sensor capability. We describe how the characteristics of a highly developed prototype correspond to and satisfy the requirements of autonomous navigation and the experimental scenarios of the ARL Robotics Collaborative Technology Alliance (RCTA) program. In particular, the long maximum and short minimum range capability of the ARL MEMS LADAR makes it remarkably suitable for a wide variety of scenarios from building mapping to the manipulation of objects at close range, including dexterous manipulation with robotic arms. A prototype system was applied to a small (approximately 50 kg) unmanned robotic vehicle as the primary mobility perception sensor. We present the results of a field test where the perception information supplied by the LADAR system successfully accomplished the experimental objectives of an Integrated Research Assessment (IRA).

  5. Case study: The Avengers 3D: cinematic techniques and digitally created 3D

    NASA Astrophysics Data System (ADS)

    Clark, Graham D.

    2013-03-01

    Marvel's THE AVENGERS was the third film Stereo D collaborated on with Marvel, and it was a summation of our artistic development: of what digitally created 3D and Stereo D's artists and toolsets afford Marvel's filmmakers, namely the ability to shape stereographic space to support the film and story in a way that balances human perception and live photography. We took our artistic lead from the cinematic intentions of Marvel, the Director Joss Whedon, and Director of Photography Seamus McGarvey. In the digital creation of a 3D film from a 2D image capture, Stereo D offers the filmmakers recommendations on cinematic techniques at each step, from pre-production onwards, through set, into post. As the footage arrives at our facility we respond in depth to the cinematic qualities of the imagery in the context of the edit and story, with the guidance of the Directors and Studio, creating stereoscopic imagery. Our involvement in The Avengers began early in production; after reading the script we had the opportunity and honor to meet and work with the Director Joss Whedon and DP Seamus McGarvey on set, and into post. We presented what is obvious to such great filmmakers in the way of cinematic techniques as they related to the standard depth cues and story points we would use to evaluate depth for their film. Our hope was that any cinematic habits that supported better 3D would be emphasized. In searching for a 3D statement for the studio and filmmakers, we arrived at a stereographic style that allowed for comfort and maximum visual engagement for the viewer.

  6. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  7. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  8. IFSAR processing for 3D target reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2005-05-01

    In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
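
    The magnitude-difference idea can be sketched as follows: in an ideal single-scatterer resolution cell the two IFSAR channel magnitudes nearly agree, so a large normalized difference flags a possible multiple-scatterer cell. The exact statistic and threshold here are illustrative assumptions, not the paper's.

```python
def multiple_scatterer_test(mag1, mag2, threshold=0.2):
    """Flag a resolution cell as possibly containing multiple scatterers
    when the normalized magnitude difference between the two IFSAR
    channel images is large."""
    diff = abs(mag1 - mag2) / max(mag1 + mag2, 1e-12)
    return diff > threshold

# Near-equal magnitudes pass; strongly unequal magnitudes are flagged.
single = multiple_scatterer_test(1.0, 1.05)
multiple = multiple_scatterer_test(1.0, 0.4)
```

    Cells that fail the test would be candidates for discarding or for the phase-linearity tests the paper describes when three or more images are available.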

  9. Rapid high-fidelity visualisation of multispectral 3D mapping

    NASA Astrophysics Data System (ADS)

    Tudor, Philip M.; Christy, Mark

    2011-06-01

    Mobile LIDAR scanning typically provides captured 3D data in the form of 3D 'Point Clouds'. Combined with colour imagery these data produce coloured point clouds or, if further processed, polygon-based 3D models. The use of point clouds is simple and rapid, but visualisation can appear ghostly and diffuse. Textured 3D models provide high fidelity visualisation, but their creation is time consuming, difficult to automate and can modify key terrain details. This paper describes techniques for the visualisation of fused multispectral 3D data that approach the visual fidelity of polygon-based models with the rapid turnaround and detail of 3D point clouds. The general approaches to data capture and data fusion are identified as well as the central underlying mathematical transforms, data management and graphics processing techniques used to support rapid, interactive visualisation of very large multispectral 3D datasets. Performance data with respect to real-world 3D mapping as well as illustrations of visualisation outputs are included.

  10. World Wind 3D Earth Viewing

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick; Maxwell, Christopher; Kim, Randolph; Gaskins, Tom

    2007-01-01

    World Wind allows users to zoom from satellite altitude down to any place on Earth, leveraging high-resolution LandSat imagery and SRTM (Shuttle Radar Topography Mission) elevation data to experience Earth in visually rich 3D. In addition to Earth, World Wind can also visualize other planets, and there are already comprehensive data sets for Mars and the Earth's moon, which are as easily accessible as those of Earth. There have been more than 20 million downloads to date, and the software is being used heavily by the Department of Defense due to the code's ability to be extended and the evolution of the code courtesy of NASA and the user community. Primary features include the dynamic access to public domain imagery and its ease of use. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. A JAVA version will be available soon. Navigation is automated with single clicks of a mouse, or by typing in any location to automatically zoom in to see it. The World Wind install package contains the necessary requirements such as the .NET runtime and managed DirectX library. World Wind can display combinations of data from a variety of sources, including Blue Marble, LandSat 7, SRTM, NASA Scientific Visualization Studio, GLOBE, and much more. A thorough list of features, the user manual, a key chart, and screen shots are available at http://worldwind.arc.nasa.gov.

  11. Target surface finding using 3D SAR data

    NASA Astrophysics Data System (ADS)

    Ruiter, Jason R.; Burns, Joseph W.; Subotic, Nikola S.

    2005-05-01

    Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all-weather, near-visual target identification and/or scene interpretation. One method of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface-finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube, which provides a recognizable representation of the target surface. This algorithm was applied to the public-release X-patch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The algorithm's sensitivity to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low Signal-to-Noise Ratio (SNR) conditions.
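
    The core primitive of Marching Tetrahedrons is linear interpolation of the isosurface crossing point along each tetrahedron edge whose endpoint values straddle the iso-level. The minimal sketch below shows that step only; the full per-case triangulation table is omitted.

```python
def edge_crossing(p0, v0, p1, v1, iso):
    """Linearly interpolate the isosurface crossing point on one edge."""
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def tet_crossings(verts, vals, iso):
    """Return crossing points for every tetrahedron edge that straddles iso."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pts = []
    for i, j in edges:
        if (vals[i] - iso) * (vals[j] - iso) < 0:   # sign change => crossing
            pts.append(edge_crossing(verts[i], vals[i], verts[j], vals[j], iso))
    return pts

# One vertex above the iso-level, three below: the surface cuts three edges.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
pts = tet_crossings(verts, [1.0, 0.0, 0.0, 0.0], 0.5)
```

    In the full algorithm these crossing points are connected into one or two triangles per tetrahedron, and the triangles from all tetrahedra tile the extracted surface.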

  12. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  13. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  14. LLNL-Earth3D

    Energy Science and Technology Software Center (ESTSC)

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  15. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging beyond real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the volumes estimated by 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for the 3D imaging of processed data. Considerable differences were found among the liver volumes estimated by the three techniques. 3D ultrasound represents a valuable method to judge morphological appearance in abdominal findings. The possibility of volumetric measurement enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible. PMID:7919882
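
    For orientation, a common clinical volume approximation from three orthogonal diameters is the ellipsoid formula V = (π/6)·a·b·c. This is a standard formula for illustration, not necessarily the volumetric program used in the study.

```python
import math

def ellipsoid_volume(a, b, c):
    """Ellipsoid approximation of a lesion volume from three orthogonal
    diameters: V = (pi / 6) * a * b * c."""
    return math.pi / 6.0 * a * b * c

vol = ellipsoid_volume(2.0, 3.0, 4.0)   # diameters in cm -> volume in cm^3
```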

  16. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  17. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  18. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift.
Several talks were devoted to reporting recent observations with newly

  19. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  20. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  1. 3D wavefront image formation for NIITEK GPR

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.
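
    One simple background-removal step in this spirit (illustrative only; the paper's adaptive filter is more elaborate): the ground bounce repeats in nearly every along-track trace, so subtracting the per-depth mean trace suppresses it while preserving localized target echoes.

```python
def suppress_ground_return(bscan):
    """Subtract the along-track mean trace from a GPR B-scan, given as a
    list of traces (one per along-track position), each a list of samples
    (one per depth). Removes returns common to all traces, such as the
    ground bounce."""
    n_traces = len(bscan)
    n_depths = len(bscan[0])
    mean_trace = [sum(tr[d] for tr in bscan) / n_traces for d in range(n_depths)]
    return [[tr[d] - mean_trace[d] for d in range(n_depths)] for tr in bscan]

# Constant ground bounce (value 5) in every trace, plus one target echo.
bscan = [[5.0, 0.0], [5.0, 0.0], [5.0, 4.0]]
clean = suppress_ground_return(bscan)
```

    After subtraction the constant ground bounce is zeroed while the isolated echo remains prominent; adaptive versions update the background estimate as the platform moves.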

  2. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  3. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  4. 3D sensor algorithms for spacecraft pose determination

    NASA Astrophysics Data System (ADS)

    Trenkle, John M.; Tchoryk, Peter, Jr.; Ritter, Greg A.; Pavlich, Jane C.; Hickerson, Aaron S.

    2006-05-01

    Researchers at the Michigan Aerospace Corporation have developed accurate and robust 3-D algorithms for pose determination (position and orientation) of satellites as part of an on-going effort supporting autonomous rendezvous, docking and space situational awareness activities. 3-D range data from a LAser Detection And Ranging (LADAR) sensor is the expected input; however, the approach is unique in that the algorithms are designed to be sensor independent. Parameterized inputs allow the algorithms to be readily adapted to any sensor of opportunity. The cornerstone of our approach is the ability to simulate realistic range data that may be tailored to the specifications of any sensor. We were able to modify an open-source raytracing package to produce point cloud information from which high-fidelity simulated range images are generated. The assumptions made in our experimentation are as follows: 1) we have access to a CAD model of the target including information about the surface scattering and reflection characteristics of the components; 2) the satellite of interest may appear at any 3-D attitude; 3) the target is not necessarily rigid, but does have a limited number of configurations; and, 4) the target is not obscured in any way and is the only object in the field of view of the sensor. Our pose estimation approach then involves rendering a large number of exemplars (100k to 5M), extracting 2-D (silhouette- and projection-based) and 3-D (surface-based) features, and then training ensembles of decision trees to predict: a) the 4-D regions on a unit hypersphere into which the unit quaternion that represents the vehicle [Qx, Qy, Qz, Qw] is pointing, and, b) the components of that unit quaternion. Results have been quite promising and the tools and simulation environment developed for this application may also be applied to non-cooperative spacecraft operations, Autonomous Hazard Detection and Avoidance (AHDA) for landing craft, terrain mapping, vehicle
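
    The hypersphere-region step can be sketched as nearest-prototype classification of a unit quaternion, using the absolute dot product so that q and -q (which represent the same rotation) fall in the same region. The prototype set and metric below are illustrative assumptions, not the paper's trained predictor.

```python
def nearest_pose_region(q, prototypes):
    """Classify a unit quaternion (qx, qy, qz, qw) into the hypersphere
    region whose prototype quaternion it is closest to, measured by the
    absolute dot product (invariant to the q / -q sign ambiguity)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(range(len(prototypes)),
               key=lambda i: abs(dot(q, prototypes[i])))

# The identity rotation (with flipped sign) maps to the identity prototype.
protos = [(0.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0)]
region = nearest_pose_region((0.0, 0.0, 0.0, -1.0), protos)
```

    In the paper's pipeline the region label would come from decision-tree ensembles over image features rather than from a geometric distance, but the binning target is the same.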

  5. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  6. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They can be fabricated in customized shapes with various material properties and with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ-on-a-chip models. PMID:26066320

  7. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
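    The core quantity the abstract evaluates, residual vertical disparity between matched keypoints, can be sketched in a few lines. This is a simplified stand-in for the paper's full roll/pitch/yaw/scale estimation: a robust median over matched keypoint rows, which also absorbs a few erroneous matches without explicit frame rejection.

    ```python
    import numpy as np

    def estimate_vertical_misalignment(pts_left, pts_right):
        """Median vertical disparity (pixels) over matched keypoints; the
        median tolerates a minority of erroneous matches."""
        return float(np.median(pts_right[:, 1] - pts_left[:, 1]))

    # Synthetic check: right frame shifted 3 px vertically, plus bad matches.
    rng = np.random.default_rng(1)
    left = rng.uniform(0, 1000, size=(50, 2))
    right = left + np.array([40.0, 3.0])           # disparity + 3 px vertical error
    right[:5] += rng.normal(0, 100, size=(5, 2))   # five erroneous matches
    offset = estimate_vertical_misalignment(left, right)
    ```

    A full calibration would fit a parametric left-to-right transform to the matches; the median offset is the zeroth-order version of that fit.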

  8. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user-friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715
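    One of the per-layer clustering options listed above, k-means, can be sketched as plain Lloyd's algorithm. This is a generic illustration, not Arena3D's Java implementation; the two-group test data and the seed points are invented for the example.

    ```python
    import numpy as np

    def kmeans(X, centers, iters=20):
        """Plain Lloyd's k-means: assign points to the nearest center,
        recompute centers as cluster means, repeat."""
        centers = centers.copy().astype(float)
        for _ in range(iters):
            labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            centers = np.array([X[labels == j].mean(axis=0)
                                for j in range(len(centers))])
        return labels, centers

    # Two well-separated synthetic node groups on one layer.
    rng = np.random.default_rng(6)
    X = np.vstack([rng.normal(0.0, 0.5, (30, 2)),
                   rng.normal(10.0, 0.5, (30, 2))])
    labels, centers = kmeans(X, X[[0, 30]])    # one seed from each group
    ```

    In a layered view, the cluster labels would then drive node grouping or coloring on that layer.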

  9. Fdf in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver and has proven very effective for simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and the accuracy of the simulated results are assessed along with appraisal of the overall performance of the methodology. The SFMDF-US3D is now capable of simulating high speed flows in complex configurations.

  10. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R. Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.

  11. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target with a pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. To remove the effect of sampling rate on range resolution, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulse laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.
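    The spatial-correlation half of ghost imaging can be illustrated with a toy computational ghost imaging reconstruction: correlate random illumination patterns with a single-pixel "bucket" signal to recover a 2-D reflectivity map. This is a deliberate simplification, the 16×16 target and random-pattern model are invented here, and it omits the heterodyne range channel the abstract adds.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    side, n_meas = 16, 4000

    # Hypothetical target reflectivity: a bright square on a dark background.
    target = np.zeros((side, side))
    target[5:11, 5:11] = 1.0

    patterns = rng.random((n_meas, side * side))   # random speckle illumination
    bucket = patterns @ target.ravel()             # single-pixel "bucket" signal

    # Second-order (intensity) correlation recovers the image.
    g2 = (bucket - bucket.mean()) @ (patterns - patterns.mean(axis=0)) / n_meas
    image = g2.reshape(side, side)
    ```

    With enough measurements the correlation map brightens exactly where the target reflects; the heterodyne scheme replaces the pulsed time-of-flight channel with a beat-frequency measurement for range.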

  12. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  13. Optical imaging process based on two-dimensional Fourier transform for synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Sun, Zhiwei; Zhi, Ya'nan; Liu, Liren; Sun, Jianfeng; Zhou, Yu; Hou, Peipei

    2013-09-01

    Synthetic aperture imaging ladar (SAIL) systems typically generate large amounts of data that are difficult to compress with digital methods. This paper presents an optical SAIL processor based on compensation of the quadratic phase of the echo in the azimuth direction and a two-dimensional Fourier transform. The optical processor mainly consists of one phase-only liquid crystal spatial light modulator (LCSLM) to load the phase data of the target echo, one cylindrical lens to compensate the quadratic phase, and one spherical lens to fulfill the task of the two-dimensional Fourier transform. We show the image processing result of a practical target echo obtained by a synthetic aperture imaging ladar demonstrator. The optical processor is compact and lightweight and could provide inherently parallel, speed-of-light computing capability; it has a promising future for applications, especially in onboard and satellite-borne SAIL systems.
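    The two processing stages above have a simple numerical analogue: cancel a quadratic azimuth phase (the cylindrical lens' role), then take a 2-D Fourier transform (the spherical lens) to focus the image. The point-target echo, its position, and the phase coefficient below are all invented for illustration.

    ```python
    import numpy as np

    N = 64
    x = np.arange(N) - N // 2
    X, Y = np.meshgrid(x, x)                # X varies along columns, Y along rows

    # Hypothetical point-target echo: a linear phase (target position) plus a
    # quadratic azimuth phase that defocuses the image.
    fx, fy = 5, -7                          # target position in image bins
    alpha = 0.004                           # quadratic phase coefficient
    echo = np.exp(2j * np.pi * (fx * X + fy * Y) / N) * np.exp(1j * alpha * Y**2)

    # Stage 1: compensate the quadratic phase.  Stage 2: 2-D FFT focuses it.
    focused = np.fft.fftshift(np.fft.fft2(echo * np.exp(-1j * alpha * Y**2)))
    peak = np.unravel_index(np.argmax(np.abs(focused)), focused.shape)
    ```

    After compensation, all the echo's energy collapses into the single Fourier bin corresponding to the target position; without it, the peak smears across azimuth.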

  14. Advances in ground vehicle-based LADAR for standoff detection of road-side hazards

    NASA Astrophysics Data System (ADS)

    Hollinger, Jim; Vessey, Alyssa; Close, Ryan; Middleton, Seth; Williams, Kathryn; Rupp, Ronald; Nguyen, Son

    2016-05-01

    Commercial sensor technology has the potential to bring cost-effective sensors to a number of U.S. Army applications. By using sensors built for widespread commercial applications, such as the automotive market, the Army can decrease costs of future systems while increasing overall capabilities. Additional sensors operating in alternate and orthogonal modalities can also be leveraged to gain a broader spectrum measurement of the environment. Leveraging multiple phenomenologies can reduce false alarms and make detection algorithms more robust to varied concealment materials. In this paper, this approach is applied to the detection of roadside hazards partially concealed by light-to-medium vegetation. This paper will present advances in detection algorithms using a ground vehicle-based commercial LADAR system. The benefits of augmenting a LADAR with millimeter-wave automotive radar and results from relevant data sets are also discussed.

  15. The laser linewidth effect on the image quality of phase coded synthetic aperture ladar

    NASA Astrophysics Data System (ADS)

    Cai, Guangyu; Hou, Peipei; Ma, Xiaoping; Sun, Jianfeng; Zhang, Ning; Li, Guangyuan; Zhang, Guo; Liu, Liren

    2015-12-01

    The phase coded (PC) waveform in synthetic aperture ladar (SAL) outperforms the linear frequency modulated (LFM) signal in lower side lobes, shorter pulse duration, and in making rigid control of the chirp starting point in every pulse unnecessary. Inherited from radar PC waveforms and strip-map SAL, the backscattered signal of a point target in PC SAL is presented, and the two-dimensional matched filtering algorithm is introduced to focus a point image. As an inherent property of lasers, linewidth is always detrimental to coherent ladar imaging. With the widely adopted laser linewidth model, the effect of laser linewidth on SAL image quality was theoretically analyzed and examined via Monte Carlo simulation. The research gives a clear view of how to select linewidth parameters in future PC SAL systems.

  16. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular, used in many fields ranging from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of bi-dimensional printing, which allows one to obtain a solid object from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer allows one to realize, in a simple way, very complex shapes, which would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is obtained by superposing one layer on the others, it doesn't need any particular workflow: it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose peculiar mechanical properties make it widely used in several fields, like pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  17. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  18. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  19. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  20. Holography, tomography and 3D microscopy as linear filtering operations

    NASA Astrophysics Data System (ADS)

    Coupland, J. M.; Lobera, J.

    2008-07-01

    In this paper, we characterize 3D optical imaging techniques as 3D linear shift-invariant filtering operations. From the Helmholtz equation that is the basis of scalar diffraction theory, we show that the scattered field, or indeed a holographic reconstruction of this field, can be considered to be the result of a linear filtering operation applied to a source distribution. We note that if the scattering is weak, the source distribution is independent of the scattered field and a holographic reconstruction (or in fact any far-field optical imaging system) behaves as a 3D linear shift-invariant filter applied to the refractive index contrast (which effectively defines the object). We go on to consider tomographic techniques that synthesize images from recordings of the scattered field using different illumination conditions. In our analysis, we compare the 3D response of monochromatic optical tomography with the 3D imagery offered by confocal microscopy and scanning white light interferometry (using quasi-monochromatic illumination) and explain the circumstances under which these approaches are equivalent. Finally, we consider the 3D response of polychromatic optical tomography and in particular the response of spectral optical coherence tomography and scanning white light interferometry.
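    The "imaging as 3D linear shift-invariant filtering" picture above reduces, in the discrete weak-scattering case, to a circular convolution of the object with a point-spread function, which the FFT performs directly. The delta-function object and kernels below are toy stand-ins, not the paper's derived transfer functions.

    ```python
    import numpy as np

    def lsi_filter_3d(source, psf):
        """Apply a 3-D linear shift-invariant filter (circular convolution
        via the FFT): multiply spectra, transform back."""
        return np.real(np.fft.ifftn(np.fft.fftn(source) * np.fft.fftn(psf)))

    obj = np.zeros((16, 16, 16)); obj[8, 8, 8] = 1.0   # point scatterer
    psf = np.zeros((16, 16, 16)); psf[0, 0, 0] = 1.0   # identity kernel
    out = lsi_filter_3d(obj, psf)
    ```

    Swapping in different spectral masks for the kernel is exactly how the paper compares techniques: each imaging modality corresponds to a different 3-D transfer function applied to the same refractive index contrast.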

  1. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  2. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  3. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The next year's activities will indicate the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials related activities from the previous project.

  4. SNL3dFace

    Energy Science and Technology Software Center (ESTSC)

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  5. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  6. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
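    The ICP alignment step mentioned above can be sketched as minimal point-to-point ICP: match each source point to its nearest destination point (brute force here), solve for the rigid fit with a Kabsch/Procrustes step, and repeat. This is a generic sketch, not the SNL3dFace implementation, and the misaligned test clouds are synthetic.

    ```python
    import numpy as np

    def best_rigid_transform(A, B):
        """Least-squares rotation R and translation t with R @ A[i] + t ~ B[i]
        (Kabsch / orthogonal Procrustes)."""
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cb - R @ ca

    def icp(src, dst, iters=30):
        """Minimal point-to-point ICP with brute-force nearest neighbours."""
        cur = src.copy()
        for _ in range(iters):
            d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
            cur = cur @ R.T + t
        return cur

    rng = np.random.default_rng(4)
    dst = rng.uniform(-1, 1, size=(200, 3))
    theta = 0.05                                   # small rotation about z
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    src = dst @ Rz.T + np.array([0.02, -0.01, 0.03])
    aligned = icp(src, dst)
    ```

    A production system would use a k-d tree for the nearest-neighbour search and outlier rejection on the matches; the structure of the loop is otherwise the same.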

  7. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  8. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.
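    TACO3D itself is an implicit finite-element code; as a toy illustration of the transient heat-conduction problem it solves, here is a single explicit finite-difference step of the 3-D heat equation, with periodic boundaries assumed purely for brevity (TACO3D's boundary-condition options are far richer).

    ```python
    import numpy as np

    def heat_step_3d(T, alpha, dx, dt):
        """One explicit finite-difference step of dT/dt = alpha * laplacian(T)
        on a periodic grid (stability requires alpha*dt/dx**2 <= 1/6)."""
        lap = sum(np.roll(T, s, axis)
                  for axis in range(3) for s in (-1, 1)) - 6 * T
        return T + alpha * dt / dx**2 * lap

    T = np.zeros((16, 16, 16))
    T[8, 8, 8] = 1.0                  # initial hot spot
    for _ in range(50):
        T = heat_step_3d(T, alpha=1.0, dx=1.0, dt=0.1)
    ```

    An implicit scheme like TACO3D's trades this explicit stability limit for a linear solve per step, which is why it can take much larger time steps on stiff problems.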

  9. Developing Spatial Reasoning Through 3D Representations of the Universe

    NASA Astrophysics Data System (ADS)

    Summers, F.; Eisenhamer, B.; McCallister, D.

    2013-12-01

    Mental models of astronomical objects are often greatly hampered by the flat two-dimensional representation of pictures from telescopes. Lacking experience with the true structures in much of the imagery, there is no basis for anything but the default interpretation of a picture postcard. Using astronomical data and scientific visualizations, our professional development session allows teachers and their students to develop their spatial reasoning while forming more accurate and richer mental models. Examples employed in this session include star positions and constellations, morphologies of both normal and interacting galaxies, shapes of planetary nebulae, and three dimensional structures in star forming regions. Participants examine, imagine, predict, and confront the 3D interpretation of well-known 2D imagery using authentic data from NASA, the Hubble Space Telescope, and other scientific sources. The session's cross-disciplinary nature includes science, math, and artistic reasoning while addressing common cosmic misconceptions. Stars of the Orion Constellation seen in 3D explodes the popular misconception that stars in a constellation are all at the same distance. A scientific visualization of two galaxies colliding provides a 3D comparison for Hubble images of interacting galaxies.

  10. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
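    One plausible reading of the relative-entropy index above is a Kullback-Leibler divergence between the observed distribution of pore voxels over coarse boxes and a uniform reference; the sketch below uses that reading on an invented binary CT volume, and is not necessarily the authors' exact index.

    ```python
    import numpy as np

    def relative_entropy(p, q):
        """Kullback-Leibler divergence sum p*log2(p/q), over bins with p > 0."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        m = p > 0
        return float(np.sum(p[m] * np.log2(p[m] / q[m])))

    def box_distribution(img, s):
        """Fraction of all pore voxels falling in each s*s*s box."""
        n = img.shape[0] // s
        c = img[:n*s, :n*s, :n*s].reshape(n, s, n, s, n, s).sum(axis=(1, 3, 5))
        return (c / c.sum()).ravel()

    # Hypothetical binary CT volume: 1 = pore, 0 = solid, ~30% porosity.
    rng = np.random.default_rng(5)
    img = (rng.random((64, 64, 64)) < 0.3).astype(int)

    p = box_distribution(img, 8)
    q = np.full_like(p, 1.0 / p.size)     # uniform reference distribution
    H_rel = relative_entropy(p, q)
    ```

    Repeating the computation over a range of box sizes s gives the scale dependence the abstract emphasizes: structured soils deviate from the uniform reference differently at different scales.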

  11. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

    Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in micro-fluidics, axicon lenses were fabricated (both single and arrays) for generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers-deep channel. Further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of optical performance by the 3D-FDTD method is presented.

  12. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  13. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  14. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  15. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

    A new method for 360-degree 3D shape measurement, combining light sectioning and phase shifting techniques, is presented in this paper. A sinusoidal light field is applied in the projected light stripe, and the phase shifting technique is used to calculate the phases of the light slit. The wrapped phase distribution of the slit is then formed, and unwrapping is performed using the height information obtained from the light sectioning method, so that phase measurements with better precision can be obtained. Finally, the target 3D shape data are produced from the geometric relationships between the phases and the object heights. The principles of the method are discussed in detail and experimental results are shown.
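The phase-shifting step described above can be sketched with a standard four-step algorithm (an assumption — the abstract does not state how many shifts are used): with frames I_k = A + B·cos(φ + kπ/2), the wrapped phase is φ = atan2(I4 − I2, I1 − I3).

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    # Wrapped phase from four frames shifted by 0, pi/2, pi, 3pi/2:
    # I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)
    return np.arctan2(i4 - i2, i1 - i3)

# synthetic light slit with a known phase ramp inside (-pi, pi]
phi_true = np.linspace(-1.0, 1.0, 100)
a, b = 0.5, 0.4                 # background intensity and modulation depth
frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)   # recovers phi_true exactly (no noise)
```

For phase excursions beyond ±π the result wraps, which is where the height data from light sectioning would guide the unwrapping.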

  16. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human being's history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. PMID:26153673

  17. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling the appearance and mobility of a real human hand as closely as possible while requiring no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand, excluding actuators, was $167, significantly lower than that of other robotic hands, which require more complex assembly processes.

  18. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  19. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A. Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases have been calculated using the molecular-mechanics methods. It is established that 3D-graphite polytypes {alpha}{sub 1,1}, {alpha}{sub 1,3}, {alpha}{sub 1,5}, {alpha}{sub 2,1}, {alpha}{sub 2,3}, {alpha}{sub 3,1}, {beta}{sub 1,2}, {beta}{sub 1,4}, {beta}{sub 1,6}, {beta}{sub 2,1}, and {beta}{sub 3,2} consist of sp{sup 2}-hybridized atoms, have hexagonal unit cells, and differ in regards to the structure of layers and order of their alternation. A possible way to experimentally synthesize new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  20. Improvement of the signal-to-noise ratio in static-mode down-looking synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Lu, Zhiyong; Sun, Jianfeng; Zhang, Ning; Zhou, Yu; Cai, Guangyu; Liu, Liren

    2015-09-01

    The static-mode down-looking synthetic aperture imaging ladar (SAIL) keeps the target and carrying platform still during the collection process. Improvement of the signal-to-noise ratio in static-mode down-looking SAIL is investigated: the signal-to-noise ratio is improved by increasing the scanning time and the sampling rate. In the experiment, targets are reconstructed at different scanning times and sampling rates; as scanning time and sampling rate increase, the reconstructed images become clearer. These techniques have great potential for application across synthetic aperture imaging ladar fields.
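The reported gain from longer scanning time and higher sampling rate is consistent with simple incoherent averaging, where SNR grows as √N for N independent samples. A toy demonstration (synthetic data, not the SAIL processing chain):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0

def empirical_snr(n, trials=2000):
    # average n noisy samples per trial; SNR = mean / std of the averages
    meas = signal + rng.normal(0.0, 0.5, size=(trials, n))
    avg = meas.mean(axis=1)
    return avg.mean() / avg.std()

# 16x the samples should buy roughly 4x the SNR, 256x roughly 16x
snrs = [empirical_snr(n) for n in (1, 16, 256)]
```

The √N law is the reason doubling the sampling rate or scan time yields diminishing, but monotonic, improvements in image clarity.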

  1. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to improve its user-friendliness. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  2. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to improve its user-friendliness. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  3. GPU-Accelerated Denoising in 3D (GD3D)

    Energy Science and Technology Software Center (ESTSC)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
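The parameter sweep for the second question can be sketched as below; a k×k moving average stands in for the real bilateral/anisotropic-diffusion/non-local-means filters, and the noiseless reference image is synthetic:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                      # noiseless reference image
noisy = clean + rng.normal(0.0, 0.3, clean.shape)

def box_smooth(img, k):
    # simple k x k moving-average denoiser (stand-in for the real filters)
    if k == 1:
        return img
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy, dx in product(range(k), range(k)):
        out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# sweep the parameter and keep whichever value minimizes MSE vs the reference
best_k = min([1, 3, 5, 7], key=lambda k: mse(box_smooth(noisy, k), clean))
```

The real software sweeps filter-specific parameters (e.g. range and spatial sigmas), but the selection criterion — minimum MSE against a noiseless reference — is the same.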

  4. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects in a particular scene grows, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated.

  5. LADAR performance simulations with a high spectral resolution atmospheric transmittance and radiance model: LEEDR

    NASA Astrophysics Data System (ADS)

    Roth, Benjamin D.; Fiorino, Steven T.

    2012-06-01

    In this study of atmospheric effects on Geiger-mode laser ranging and detection (LADAR), the parameter space is explored primarily using the Air Force Institute of Technology Center for Directed Energy's (AFIT/CDE) Laser Environmental Effects Definition and Reference (LEEDR) code. The expected performance of LADAR systems is assessed at operationally representative wavelengths of 1.064, 1.56 and 2.039 μm at a number of locations worldwide. Signal attenuation and background noise are characterized using LEEDR. These results are compared to standard atmosphere and Fast Atmospheric Signature Code (FASCODE) assessments. Scenarios evaluated are based on air-to-ground engagements, including both down-looking oblique and vertical geometries in which anticipated clear-air aerosols are expected to occur. Engagement geometry variations are considered to determine optimum employment techniques to exploit or defeat the environmental conditions. Results, presented primarily in the form of worldwide plots of notional signal-to-noise ratios, show a significant climate dependence, but large variances between climatological and standard atmosphere assessments. An overall average absolute mean difference ratio of 1.03 is found when climatological signal-to-noise ratios at 40 locations are compared to their equivalent standard atmosphere assessments. Atmospheric transmission is shown to not always correlate with signal-to-noise ratios between different atmosphere profiles. Allowing aerosols to swell with relative humidity proves to be significant, especially for up-looking geometries, reducing the signal-to-noise ratio by several orders of magnitude. Turbulence blurring effects that impact tracking and imaging show that the LADAR system has little capability at a 50 km range, yet turbulence has little impact at a 3 km range.
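The dependence of signal on atmospheric attenuation can be illustrated with the Beer-Lambert law over a monostatic (two-way) path; the extinction coefficient of 0.1 km⁻¹ is an arbitrary assumption, and geometric 1/R² spreading is omitted to isolate the atmospheric term:

```python
import math

def one_way_transmittance(alpha, range_km):
    # Beer-Lambert: T = exp(-alpha * R), with alpha in 1/km
    return math.exp(-alpha * range_km)

def relative_snr(alpha, range_km):
    # monostatic ladar traverses the path twice, so the signal scales as T^2
    return one_way_transmittance(alpha, range_km) ** 2

snr_3km = relative_snr(0.1, 3.0)     # exp(-0.6), about 55% of vacuum signal
snr_50km = relative_snr(0.1, 50.0)   # exp(-10), about 5e-5 of vacuum signal
```

Even this simplified model shows why the 50 km engagement is attenuation-limited while the 3 km engagement is not, before turbulence is even considered.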

  6. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  7. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

    Some problems in the application of Ladar reflective tomography for space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties under a limited relative-angle range; the results are useful for verifying the target shape from an incomplete image, analyzing the shadowing effect of the target, and designing satellite payloads against recognition via the reflective tomography approach. We propose an iterative maximum likelihood method based on Bayesian theory, which can effectively compress the pulse width and greatly improve the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.
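The iterative maximum likelihood method is not spelled out in the abstract; a standard choice for this kind of Poisson-noise imaging problem is Richardson-Lucy deconvolution, sketched here in 1-D as pulse compression of two closely spaced returns:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    # Iterative ML (Richardson-Lucy) update for Poisson statistics:
    #   x <- x * [ psf_flipped * (y / (psf * x)) ]   (* = convolution)
    x = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]
    for _ in range(iterations):
        est = np.convolve(x, psf, mode='same')
        ratio = observed / np.maximum(est, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode='same')
    return x

# two closely spaced returns blurred by a broad transmit pulse
truth = np.zeros(64)
truth[28], truth[34] = 1.0, 0.6
pulse = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
pulse /= pulse.sum()
blurred = np.convolve(truth, pulse, mode='same')
restored = richardson_lucy(blurred, pulse)   # peaks sharpen back toward truth
```

The multiplicative update preserves non-negativity, which is why this family of methods can sharpen range profiles without the noise amplification of naive inverse filtering.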

  8. Measurement of polarization parameters of the targets in synthetic aperture imaging LADAR

    NASA Astrophysics Data System (ADS)

    Xu, Qian; Sun, Jianfeng; Lu, Wei; Hou, Peipei; Ma, Xiaoping; Lu, Zhiyong; Sun, Zhiwei; Liu, Liren

    2015-09-01

    In synthetic aperture imaging ladar (SAIL), changes in the polarization state of the backscattered light affect the imaging. The polarization state of the reflected field is determined by the interaction of the light with the materials on the target plane. The Stokes parameters, which provide information on both light intensity and polarization state, are the ideal quantities for characterizing these features. In this paper, a measurement system for the polarization characteristics of SAIL target materials is designed. The measurement results are expected to be useful in target identification and recognition.
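Stokes parameters are conventionally recovered from six intensity measurements through polarization analyzers (horizontal, vertical, ±45°, right/left circular); a minimal sketch of that convention, not the paper's specific measurement system:

```python
import math

def stokes_from_intensities(i_h, i_v, i_45, i_135, i_rcp, i_lcp):
    # Six-measurement Stokes vector: linear H/V, linear +-45 deg, circular R/L
    s0 = i_h + i_v          # total intensity
    s1 = i_h - i_v          # horizontal vs vertical preference
    s2 = i_45 - i_135       # +45 vs -45 preference
    s3 = i_rcp - i_lcp      # right vs left circular preference
    return s0, s1, s2, s3

def degree_of_polarization(s0, s1, s2, s3):
    return math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0

# fully horizontally polarized light of unit intensity
s = stokes_from_intensities(1.0, 0.0, 0.5, 0.5, 0.5, 0.5)
dop = degree_of_polarization(*s)   # 1.0 for fully polarized light
```

The degree of polarization is the quantity that distinguishes depolarizing target materials (rough paint, foliage) from specular ones, which is what makes the Stokes vector useful for recognition.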

  9. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture. PMID:26164291

  10. 3-D HYDRODYNAMIC MODELING IN A GEOSPATIAL FRAMEWORK

    SciTech Connect

    Bollinger, J; Alfred Garrett, A; Larry Koffman, L; David Hayes, D

    2006-08-24

    3-D hydrodynamic models are used by the Savannah River National Laboratory (SRNL) to simulate the transport of thermal and radionuclide discharges in coastal estuary systems. Development of such models requires accurate bathymetry, coastline, and boundary condition data in conjunction with the ability to rapidly discretize model domains and interpolate the required geospatial data onto the domain. To facilitate rapid and accurate hydrodynamic model development, SRNL has developed a pre- and post-processor application in a geospatial framework to automate the creation of models using existing data. This automated capability allows development of very detailed models to maximize exploitation of available surface water radionuclide sample data and thermal imagery.

  11. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  12. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  13. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  14. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  15. 3D segmentation and reconstruction of endobronchial ultrasound

    NASA Astrophysics Data System (ADS)

    Zang, Xiaonan; Breslav, Mikhail; Higgins, William E.

    2013-03-01

    State-of-the-art practice for lung-cancer staging bronchoscopy often draws upon a combination of endobronchial ultrasound (EBUS) and multidetector computed-tomography (MDCT) imaging. While EBUS offers real-time in vivo imaging of suspicious lesions and lymph nodes, its low signal-to-noise ratio and tendency to exhibit missing region-of-interest (ROI) boundaries complicate diagnostic tasks. Furthermore, past efforts did not incorporate automated analysis of EBUS images and a subsequent fusion of the EBUS and MDCT data. To address these issues, we propose near real-time automated methods for three-dimensional (3D) EBUS segmentation and reconstruction that generate a 3D ROI model along with ROI measurements. Results derived from phantom data and lung-cancer patients show the promise of the methods. In addition, we present a preliminary image-guided intervention (IGI) system example, whereby EBUS imagery is registered to a patient's MDCT chest scan.

  16. High-definition 3D display for training applications

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy

    2010-04-01

    In this paper, we report on the development of a high definition stereoscopic liquid crystal display for use in training applications. The display technology provides full spatial and temporal resolution on a liquid crystal display panel consisting of 1920×1200 pixels at 60 frames per second. Display content can include mixed 2D and 3D data. Source data can be 3D video from cameras, computer generated imagery, or fused data from a variety of sensor modalities. Discussion of the use of this display technology in military and medical industries will be included. Examples of use in simulation and training for robot tele-operation, helicopter landing, surgical procedures, and vehicle repair, as well as for DoD mission rehearsal will be presented.

  17. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process.

  18. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement the acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
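Once the propagation delays are expressed as sums over the RBF weights, the tomographic inversion reduces to a linear least-squares problem. A simplified sketch with direct point observations of a scalar field (the actual system inverts ray-integrated travel times):

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(points, centers, scale=0.3):
    # Gaussian radial basis functions evaluated at 2-D points
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

# four basis centers on the unit square and a "true" weight vector
centers = np.array([[x, y] for x in (0.25, 0.75) for y in (0.25, 0.75)])
w_true = np.array([1.0, -0.5, 0.3, 0.8])

obs_pts = rng.uniform(0.0, 1.0, size=(40, 2))   # sample locations
obs = rbf(obs_pts, centers) @ w_true            # noiseless observations

# recover the field weights by linear least squares
A = rbf(obs_pts, centers)
w_est, *_ = np.linalg.lstsq(A, obs, rcond=None)
```

Because the RBF model is linear in its weights, supplementary point measurements from the UAV and ground stations simply add rows to the same design matrix.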

  19. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  20. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Christodoulou, D. M.; Koide, S.; Sakai, J.-I.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W=4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure. We also simulate jets with more realistic initial conditions for jet injection, including a helical magnetic field and perturbed density, velocity, and internal energy, which are expected to arise in the process of jet generation. Three possible explanations for the observed variability are (i) tidal disruption of a star falling into the black hole, (ii) instabilities in the relativistic accretion disk, and (iii) jet-related processes. New results will be reported at the meeting.

  1. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  2. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived only after mature mass-processing technologies were developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, conventional processing methods fail to connect graphene to today's personalization tide and exploit it to its full extent; a new technology is needed. Three-dimensional (3D) printing supplies the missing link between graphene materials and the digital mainstream, and their alliance could push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress low during the printing process. PMID:26153673

  3. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner smoothly moves the device around the human body, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. Because the data is acquired in motion, it provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to factors such as the angle of incidence, the distance between the device and the subject, and environmental conditions that influence the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
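The view-dependent combination described above can be sketched as a simple confidence-weighted average. The confidence model (cosine of the incidence angle divided by a distance term) and all numbers below are illustrative assumptions, not the paper's actual formula:

```python
import math

def fuse_measurements(measurements):
    """Combine repeated thermal readings of one surface point into a single
    value, weighting each by a confidence score. Hypothetical model:
    confidence falls with distance and with oblique viewing angles."""
    num = den = 0.0
    for temp_c, distance_m, incidence_deg in measurements:
        # Weight: high for close, near-normal views; low for far, oblique ones
        w = math.cos(math.radians(incidence_deg)) / (1.0 + distance_m**2)
        num += w * temp_c
        den += w
    return num / den

fused = fuse_measurements([
    (36.4, 0.5, 10.0),   # close, near-normal view: high confidence
    (35.1, 1.5, 60.0),   # far, oblique view: low confidence
])
```

The fused value lands much closer to the high-confidence reading, which is the intended behavior of the adaptive combination.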

  4. 3D Ion Temperature Reconstruction

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi

    2009-11-01

    The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics have been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is followed over several repeatable discharges to follow the heating and acceleration process during the merging reconnection.
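The Abel-inversion step used for the poloidal-plane measurements can be illustrated with a basic "onion-peeling" scheme: assuming axisymmetry, the chord-integrated signal at each impact parameter is a triangular linear system in the shell emissivities. The shell discretization and the parabolic test profile below are hypothetical, not the TS-4 implementation:

```python
import numpy as np

def chord_matrix(r):
    """Path-length matrix L for an axisymmetric plasma sliced into shells:
    L[j, k] is the length of the chord at impact parameter r[j] inside
    shell k (between radii edges[k] and edges[k+1])."""
    n = len(r)
    edges = np.append(r, 2 * r[-1] - r[-2])  # outer edge of last shell
    L = np.zeros((n, n))
    for j in range(n):
        for k in range(j, n):
            a = max(edges[k + 1]**2 - r[j]**2, 0.0)
            b = max(edges[k]**2 - r[j]**2, 0.0)
            L[j, k] = 2.0 * (np.sqrt(a) - np.sqrt(b))
    return L

def onion_peel(p, r):
    """Basic Abel inversion: solve the upper-triangular system L @ eps = p
    for the shell emissivities eps, given chord-integrated data p."""
    return np.linalg.solve(chord_matrix(r), p)

# Round-trip check with a hypothetical parabolic emissivity profile
r = np.linspace(0.0, 0.9, 12)
eps_true = 1.0 - (r / 1.05)**2
p = chord_matrix(r) @ eps_true  # synthetic chord-integrated measurements
eps_rec = onion_peel(p, r)
```

Because the same path-length matrix is used in both directions, the round trip recovers the profile exactly; with real noisy data one would typically add regularization.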

  5. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed-bit format. It would have been impractical, if not impossible, to process the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed-bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing, with high resolution as a priority. Much of the potential resolution had been lost through the initial summing of the field data. Modern computers have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality-control and statics-resolution tools, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  6. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel exhibits superelasticity and high electrical conductivity. PMID:26861680

  7. Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment

    PubMed Central

    Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon

    2013-01-01

    As LADAR systems applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance prediction and selection of optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of this study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling: the geometry, the radiometry and the detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noise. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of outliers in the simulated points reached 25.53%, indicating the need for efficient outlier-elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
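A minimal sketch of the detection model for a Geiger-mode receiver: the detector records only the first photon event in the range gate, so a noise event arriving before the true return produces a false alarm, and an empty gate produces a dropout. The rates, gate length and timing jitter below are illustrative assumptions, not the paper's system parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_geiger_shot(range_m, signal_rate, noise_rate, gate_ns=1000.0,
                         c=0.299792458):  # speed of light in m/ns
    """Simulate one Geiger-mode shot: return the trigger time in ns,
    or None for a dropout (no event in the gate).

    Noise events arrive as a homogeneous Poisson process at `noise_rate`
    (events/ns); the signal return adds a Poisson-distributed burst of
    photons clustered at the true round-trip time."""
    t_true = 2.0 * range_m / c  # round-trip time in ns
    n_noise = rng.poisson(noise_rate * gate_ns)
    events = list(rng.uniform(0.0, gate_ns, n_noise))
    n_sig = rng.poisson(signal_rate)  # mean detected signal photons
    events += [t_true + rng.normal(0.0, 0.5) for _ in range(n_sig)]
    if not events:
        return None  # dropout: no trigger in the gate
    return min(events)  # Geiger mode records only the FIRST event

# Monte-Carlo estimate of hit, dropout and false-alarm proportions
trials = 10000
range_m, tol_ns = 120.0, 3.0
t_true = 2.0 * range_m / 0.299792458
hits = dropouts = false_alarms = 0
for _ in range(trials):
    t = simulate_geiger_shot(range_m, signal_rate=3.0, noise_rate=1e-4)
    if t is None:
        dropouts += 1
    elif abs(t - t_true) < tol_ns:
        hits += 1
    else:
        false_alarms += 1
```

Sweeping `signal_rate` and `noise_rate` in such a loop is one way to tabulate dropout and false-alarm rates against design parameters, in the spirit of the figures reported in the paper.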

  8. High power CO2 coherent ladar haven't quit the stage of military affairs

    NASA Astrophysics Data System (ADS)

    Zhang, Heyong

    2015-05-01

    The invention of the laser in 1960 created the possibility of using a source of coherent light as a transmitter for a laser radar (ladar). Coherent ladar shares many of the basic features of more common microwave radars. However, it is the extremely short operating wavelength of lasers that introduces new military applications, especially in the areas of missile identification, space target tracking, remote range finding, camouflage discrimination and toxic agent detection. Therefore, the most popular applications, such as laser imaging and ranging, were focused on the CO2 laser over the last few decades. With the development of solid-state and fiber lasers, however, some have claimed that the CO2 laser will disappear from military and industrial use, replaced by solid-state and fiber lasers, and that coherent CO2 laser radar will share the same fate in military affairs. In my opinion, however, the high-power CO2 laser will remain the most important laser source for laser radar and countermeasures in the future.

  9. Enhanced resolution edge and surface estimation from ladar point clouds containing multiple return data

    NASA Astrophysics Data System (ADS)

    Neilsen, Kevin D.; Budge, Scott E.

    2013-11-01

    Signal processing enables the detection of more returns in a digital ladar waveform by computing the surface response. Prior work has shown that obtaining the surface response can improve the range resolution by a factor of 2. However, this advantage presents a problem when forming a range image: each ladar shot crossing an edge contains multiple values. To exploit this information, the location of each return inside the spatial beam footprint is estimated by dividing the footprint into sections that correspond to each return and assigning the coordinates of the return to the centroid of its region. Increased resolution results on the edges of targets where multiple returns occur. Experiments focus on angled and slotted surfaces for both simulated and real data. Results show that the angle of incidence on a 75-deg surface is computed from only a single waveform with an error of 1.4 deg, and that the width of a 19-cm-wide by 16-cm-deep slot is estimated with an error of 3.4 cm using real data. Point clouds show that the edges of the slotted surface are sharpened. These results can be used to improve features extracted from objects for applications such as automatic target recognition.
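The footprint-partitioning idea can be sketched for the simple case of a straight edge crossing a circular footprint: each return is assigned the centroid of its sub-region. The grid discretization and edge geometry below are hypothetical illustrations, not the paper's algorithm:

```python
import numpy as np

def footprint_return_centroids(radius, edge_x):
    """Split a circular beam footprint at a vertical edge x = edge_x and
    return the (x, y) centroid of each sub-region, one per detected return.

    The footprint is discretized into a grid of sample points; points left
    of the edge hit the near surface, points to the right hit the far one."""
    xs, ys = np.meshgrid(np.linspace(-radius, radius, 201),
                         np.linspace(-radius, radius, 201))
    inside = xs**2 + ys**2 <= radius**2
    near = inside & (xs < edge_x)
    far = inside & (xs >= edge_x)
    centroids = []
    for region in (near, far):
        if region.any():
            centroids.append((xs[region].mean(), ys[region].mean()))
    return centroids

# Edge offset 3 cm from the center of a 10 cm footprint
c_near, c_far = footprint_return_centroids(radius=0.1, edge_x=0.03)
```

The two centroids straddle the edge, so assigning each return to its centroid rather than the footprint center sharpens reconstructed edges, consistent with the effect the abstract describes.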

  10. ShowMe3D

    Energy Science and Technology Software Center (ESTSC)

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  11. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
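The spectral-filter view that ShowMe3D describes can be approximated as a weighted sum along the spectral axis, and the region statistics as a mean and variance over a mask. The Gaussian passband model and the random test stack below are an illustrative sketch, not the ShowMe3D implementation:

```python
import numpy as np

def apply_spectral_filter(stack, wavelengths, center_nm, width_nm):
    """Collapse a hyperspectral stack (y, x, wavelength) to a 2D image as a
    filter-based confocal microscope would: weight each band by a Gaussian
    passband and sum along the spectral axis."""
    t = np.exp(-0.5 * ((wavelengths - center_nm) / width_nm)**2)
    return (stack * t).sum(axis=-1)

def region_stats(image, mask):
    """Intensity mean and variance over a boolean region mask."""
    vals = image[mask]
    return vals.mean(), vals.var()

# Hypothetical 64x64 stack with 30 bands spanning 500-790 nm
rng = np.random.default_rng(1)
wl = np.linspace(500, 790, 30)
stack = rng.uniform(0, 1, (64, 64, 30))
img = apply_spectral_filter(stack, wl, center_nm=600.0, width_nm=20.0)
mean, var = region_stats(img, img > img.mean())
```

Up to three such filters could be applied with different center wavelengths and the results composited into an RGB preview, matching the filter-based-microscope view the description mentions.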

  12. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 80's, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward-modelling algorithm, which generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction of the model parameters. Furthermore, one or two extra modelling runs are needed to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields, respectively. A straightforward application would lead to storing the wavefield at all grid points at each time step. We tackled this problem using two different approaches. 
The first one makes better use of resources for small models of dimension equal
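A 1D scalar analogue of the explicit scheme described in this record (4th order in space, 2nd order leapfrog in time) might look as follows; this is a minimal illustration of the stencil, not the authors' 3D elastic code:

```python
import numpy as np

def step_wave_1d(u_prev, u_curr, c, dt, dx):
    """One leapfrog step of u_tt = c^2 u_xx: 2nd order in time,
    4th order in space (stencil -1, 16, -30, 16, -1 over 12 dx^2)."""
    lap = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
           + 16 * u_curr[3:-1] - u_curr[4:]) / (12.0 * dx**2)
    u_next = u_curr.copy()  # boundary cells stay fixed in this sketch
    u_next[2:-2] = 2 * u_curr[2:-2] - u_prev[2:-2] + (c * dt)**2 * lap
    return u_next

# Propagate a Gaussian pulse with zero initial velocity:
# it splits into two half-amplitude pulses travelling in opposite directions.
nx, dx, c = 400, 1.0, 1.0
dt = 0.5 * dx / c  # well inside the CFL stability limit
x = np.arange(nx) * dx
u0 = np.exp(-((x - 200.0) / 10.0)**2)
u_prev, u_curr = u0.copy(), u0.copy()
for _ in range(100):
    u_prev, u_curr = u_curr, step_wave_1d(u_prev, u_curr, c, dt, dx)
```

In a full waveform-inversion loop this forward step would be run once for the synthetic data and once for the backpropagated residuals, with the gradient formed by correlating the two wavefields as the abstract describes.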

  13. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, called synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  15. 3D Structure of Tillage Soils

    NASA Astrophysics Data System (ADS)

    González-Torre, Iván; Losada, Juan Carlos; Falconer, Ruth; Hapca, Simona; Tarquis, Ana M.

    2015-04-01

    Soil structure may be defined as the spatial arrangement of soil particles, aggregates and pores. The geometry of each one of these elements, as well as their spatial arrangement, has a great influence on the transport of fluids and solutes through the soil. Fractal/multifractal methods have been increasingly applied to quantify soil structure thanks to advances in computer technology (Tarquis et al., 2003). There is no doubt that computed tomography (CT) has provided an alternative for observing intact soil structure. These CT techniques reduce the physical impact of sampling, provide three-dimensional (3D) information and allow rapid scanning to study sample dynamics in near real-time (Houston et al., 2013a). However, several authors have dedicated attention to the appropriate pore-solid CT threshold (Elliot and Heck, 2007; Houston et al., 2013b) and to the best method for estimating the multifractal parameters (Grau et al., 2006; Tarquis et al., 2009). The aim of the present study is to evaluate the effect of the algorithm applied in the multifractal method (box counting and box gliding) and of the cube size on the calculation of generalized fractal dimensions (Dq) in grey images without applying any threshold. To this end, soil samples were extracted from different areas plowed with three tools (moldboard, chisel and plow). Soil samples for each tillage treatment were packed into polypropylene cylinders of 8 cm diameter and 10 cm height. These were imaged using an mSIMCT at 155 keV and 25 mA. An aluminium filter (0.25 mm) was applied to reduce beam hardening, and several corrections were later applied during reconstruction. References: Elliot, T.R. and Heck, R.J. 2007. A comparison of 2D and 3D thresholding of CT imagery. Can. J. Soil Sci., 87(4), 405-412. Grau, J., Méndez, V., Tarquis, A.M., Saa, A. and Díaz, M.C. 2006. Comparison of gliding box and box-counting methods in soil image analysis. Geoderma, 134, 349-359. González-Torres, Iván. Theory and
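The grey-image box-counting estimate of the generalized dimensions Dq (for q ≠ 1) can be sketched as follows, treating each box's normalized intensity sum as its measure; the random test image and box sizes are illustrative assumptions, not the study's data:

```python
import numpy as np

def generalized_dimension(img, q, sizes=(2, 4, 8, 16, 32)):
    """Estimate the generalized dimension D_q of a square grey-level image
    by box counting without thresholding: each box's measure mu_i is its
    normalized sum of intensities. Valid for q != 1 (q = 1 needs the
    entropy limit)."""
    total = img.sum()
    n = img.shape[0]
    logs, logz = [], []
    for s in sizes:
        m = n - n % s  # crop so s x s boxes tile the image exactly
        boxes = img[:m, :m].reshape(m // s, s, m // s, s).sum(axis=(1, 3))
        mu = boxes[boxes > 0] / total
        logs.append(np.log(s))
        logz.append(np.log((mu**q).sum()))
    # log Z(q, s) ~ (q - 1) * D_q * log s, so D_q is the slope over (q - 1)
    slope = np.polyfit(logs, logz, 1)[0]
    return slope / (q - 1)

# Sanity check: a near-uniform random image should give D_q close to 2
rng = np.random.default_rng(0)
img = rng.uniform(0.5, 1.0, (256, 256))
d2 = generalized_dimension(img, q=2.0)
```

The gliding-box variant the study compares against differs only in letting the boxes overlap (stepping by one pixel instead of by the box size), at a much higher computational cost.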

  16. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). 
Third, the developed

  17. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.

  18. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  19. Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization

    NASA Technical Reports Server (NTRS)

    Beaulieu, K.

    2014-01-01

    Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.

  20. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  1. Yogi the rock - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Yogi, a rock taller than rover Sojourner, is the subject of this image, taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The soil in the foreground has been the location of multiple soil mechanics experiments performed by Sojourner's cleated wheels. Pathfinder scientists were able to control the force inflicted on the soil beneath the rover's wheels, giving them insight into the soil's mechanical properties. The soil mechanics experiments were conducted after this image was taken.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    The left and right stereo views can be seen individually at the original site. [figures removed for brevity]

  2. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) often falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  3. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces a 3D model of the captured object. For this particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived with photogrammetric processing software, simply by using images of the community, without visiting the site.

  4. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). 
Third, the developed

  5. Large bulk-yard 3D measurement based on videogrammetry and projected contour aiding

    NASA Astrophysics Data System (ADS)

    Ou, Jianliang; Zhang, Xiaohu; Yuan, Yun; Zhu, Xianwei

    2011-07-01

    Fast and accurate 3D measurement of a large stack-yard is an important task in bulk load-and-unload and logistics management. A stack-yard has special characteristics: a complex and irregular shape, uniform surface texture and low material reflectivity, so its 3D measurement is difficult to realize with traditional non-contact methods such as LiDAR (Light Detection And Ranging) and photogrammetry. Light-sectioning works well for measuring small bulk flows but is not suitable for a large-scale bulk-yard. In this paper, an improved method based on stereo cameras and a laser-line projector is proposed. The theoretical model comprises three key steps: matching corresponding points on contour edges in the stereo imagery based on gradient and epipolar-line constraints; computing the 3D point set of each projected contour edge by least-squares adjustment and forward intersection; and reconstructing the projected 3D contour from the point set of a single contour edge using RANSAC (RANdom SAmple Consensus) and contour spatial features. In this way, the stack-yard surface can be scanned easily by the laser-line projector, and the 3D shape of a given region can be reconstructed automatically by the stereo cameras at one observing position. Experiments show that the proposed method measures a bulk-yard in a fast, automatic, reliable and accurate way.
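The forward-intersection step can be sketched for the simplest case of a rectified stereo pair, where an epipolar-matched contour point reduces to a disparity. The camera parameters below are hypothetical, not those of the paper's rig:

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f_px=1200.0, baseline_m=0.5):
    """Forward intersection for a rectified stereo pair: the matched contour
    point (xl, y) / (xr, y) lies on the same epipolar line, so depth follows
    directly from the disparity d = xl - xr (pinhole model, pixel coordinates
    measured from the principal point)."""
    d = xl - xr
    Z = f_px * baseline_m / d          # depth along the optical axis
    X = xl * Z / f_px                  # back-project to metric coordinates
    Y = y * Z / f_px
    return np.array([X, Y, Z])

# A laser-contour point seen at x = 110 px (left) and x = 100 px (right):
P = triangulate_rectified(110.0, 100.0, 50.0)
print(P.tolist())  # [5.5, 2.5, 60.0]
```

The paper's least-squares adjustment generalizes this to unrectified cameras; RANSAC would then fit the planar laser contour through many such points while rejecting mismatches.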

  6. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processing such as graphics processing unit (GPU) computing makes real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both the digital surface model (DSM) and the digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs.
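The DSM/DEM step described above can be sketched as a normalized-surface threshold; the grid values and the 2.5 m cutoff are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def candidate_object_mask(dsm, dem, min_height_m=2.5):
    """Subtract bare-earth elevation (DEM) from the surface model (DSM) and
    keep cells tall enough to be a building or tree candidate; region grouping
    and roof construction would then operate on this mask."""
    ndsm = dsm - dem                      # height above ground per grid cell
    return ndsm >= min_height_m

dsm = np.array([[300.0, 310.0],
                [300.5, 300.2]])          # surface elevations (m)
dem = np.array([[300.0, 300.5],
                [300.3, 300.1]])          # bare-earth elevations (m)
mask = candidate_object_mask(dsm, dem)    # only the 9.5 m cell is flagged
print(mask.tolist())  # [[False, True], [False, False]]
```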
Several case

  7. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  8. 3-D Cavern Enlargement Analyses

    SciTech Connect

    EHGARTNER, BRIAN L.; SOBOLIK, STEVEN R.

    2002-03-01

    Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns as fresh water is injected to displace the crude oil from the caverns. The current criteria adopted by the SPR limit cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25 year period to roughly double the volume of a 19 cavern field. Thirteen additional leaches were then simulated until caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site. This enabled comparisons of data collected over nearly 20 years with analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic well head pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large scale drawdowns being simulated. Should they occur in the future, code predictions should be validated with actual field behavior at that time. The simulations were performed using JAS3D, a three dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and, later, on surface structures due to subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.

  9. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only for scientists but also for amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, the paper deals with samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms. To advise on the best-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, aiming at completeness, has been compiled as the result of a systematic survey. In this respect, for example, today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, recall the stage of the invention of photography.

  10. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  11. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  12. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.

  13. 3D Elastic Seismic Wave Propagation Code

    Energy Science and Technology Software Center (ESTSC)

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  14. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  15. 3-D Perspective Kamchatka Peninsula Russia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during the Shuttle Radar Topography Mission (SRTM). In the foreground is the Sea of Okhotsk. Inland from the coast, vegetated floodplains and low relief hills rise toward snow capped peaks. The topographic effects on snow and vegetation distribution are very clear in this near-horizontal view. Forming the skyline is the Sredinnyy Khrebet, the volcanic mountain range that makes up the spine of the peninsula. High resolution SRTM topographic data will be used by geologists to study how volcanoes form and to understand the hazards posed by future eruptions. This image was generated using topographic data from SRTM and an enhanced true-color image from the Landsat 7 satellite. This image contains about 2,400 meters (7,880 feet) of total relief. The topographic expression was enhanced by adding artificial shading as calculated from the SRTM elevation model. The Landsat data was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota. SRTM, launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. To collect the 3-D SRTM data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. SRTM collected three-dimensional measurements of nearly 80 percent of the Earth's surface. SRTM is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. Size: 33.3 km (20.6 miles) wide x 136 km (84 miles) coast to skyline. Location: 58.3 deg. North lat., 160 deg. East long. Orientation: Easterly view, 2 degrees

  16. 3-D Perspective View, Kamchatka Peninsula, Russia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during the Shuttle Radar Topography Mission (SRTM). In the foreground is the Sea of Okhotsk. Inland from the coast, vegetated floodplains and low relief hills rise toward snow capped peaks. The topographic effects on snow and vegetation distribution are very clear in this near-horizontal view. Forming the skyline is the Sredinnyy Khrebet, the volcanic mountain range that makes up the spine of the peninsula. High resolution SRTM topographic data will be used by geologists to study how volcanoes form and to understand the hazards posed by future eruptions.

    This image was generated using topographic data from SRTM and an enhanced true-color image from the Landsat 7 satellite. This image contains about 2,400 meters (7,880 feet) of total relief. The topographic expression was enhanced by adding artificial shading as calculated from the SRTM elevation model. The Landsat data was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM, launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar(SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. To collect the 3-D SRTM data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. SRTM collected three-dimensional measurements of nearly 80 percent of the Earth's surface. SRTM is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 33.3 km (20.6 miles) wide x

  17. Simulation of synthetic aperture imaging ladar (SAIL) for three-dimensional target model

    NASA Astrophysics Data System (ADS)

    Yi, Ning; Wu, Zhen-Sen

    2010-11-01

    In conventional imaging laser radar, the resolution on the target is constrained by the diffraction limit, which depends on the beamwidth of the laser at the target plane and the telescope's aperture. Synthetic aperture imaging ladar (SAIL) is an imaging technique that employs aperture synthesis with coherent laser radar; its resolution is determined by the total frequency spread of the source and is independent of range, so fine resolution can be achieved at long range. Ray tracing is used here to obtain two-dimensional scattering properties from a three-dimensional geometric model of an actual target, and a range-Doppler algorithm is used for the synthetic aperture processing in the laser image simulation. The results show that SAIL supports better resolution.
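The resolution claims above follow from standard ladar relations: range resolution is set by the transmitted frequency spread, while a textbook synthetic-aperture estimate gives the cross-range cell. The numbers below are illustrative, not from the simulation:

```python
C = 3.0e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Range resolution set by the total frequency spread of the source,
    independent of range: delta_r = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

def cross_range_resolution_m(wavelength_m, range_m, synthetic_len_m):
    """Textbook synthetic-aperture cross-range estimate ~ lambda * R / (2 * L_syn)."""
    return wavelength_m * range_m / (2.0 * synthetic_len_m)

print(range_resolution_m(100e9))                    # 100 GHz chirp -> 0.0015 m
print(cross_range_resolution_m(1.55e-6, 1e3, 0.1))  # 7.75 mm at 1 km range
```

At optical wavelengths even a decimeter-scale synthetic aperture yields millimeter-class cross-range cells at kilometer ranges, which is the motivation for SAIL.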

  18. Current efforts on developing an HWIL synthetic environment for LADAR sensor testing at AMRDEC

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.

    2005-05-01

    Efforts in developing a synthetic environment for testing LADAR sensors in a hardware-in-the-loop simulation are continuing at the Aviation and Missile Research, Development, and Engineering Center (AMRDEC) of the U.S. Army Research, Development and Engineering Command (RDECOM). Current activities have concentrated on developing the optical projection hardware portion of the synthetic environment. These activities range from system level design down to component level testing. Of particular interest have been schemes for generating the optical signals representing the individual pixels of the projection. Several approaches have been investigated and tested with emphasis on operating wavelength, intensity dynamic range and uniformity, and flexibility in pixel waveform generation. This paper will discuss some of the results from these current efforts at RDECOM's Advanced Simulation Center (ASC).

  19. Pose recognition of articulated target based on ladar range image with elastic shape analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zheng-Jun; Li, Qi; Wang, Qi

    2014-10-01

    Elastic shape analysis is introduced for pose recognition of articulated targets based on small samples of ladar range images. Shape deformations caused by pose changes are represented as closed elastic curves via the square-root velocity function; geodesic distances between these curves quantify shape differences, and the Karcher mean is used to build a model library. Three kinds of moment invariants (Hu, affine, and Zernike), each combined with support vector machines (SVMs), are applied to evaluate this approach. The experimental results show that, regardless of the azimuth angles of the testing samples, this approach achieves a high recognition rate using only 3 model samples across different carrier-to-noise ratios (CNRs); its performance is much better than that of the three SVM-based moment methods, especially under high-noise conditions.
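The square-root velocity representation can be sketched numerically; under it the elastic shape metric becomes a plain L2 distance, which is what makes geodesics and the Karcher mean computable. This is a minimal sketch of the representation only, not the paper's full pipeline:

```python
import numpy as np

def srvf(curve):
    """Square-root velocity function q(t) = c'(t) / sqrt(|c'(t)|) of a sampled
    closed curve (N x 2 array). Comparing curves via the L2 distance between
    their SRVFs realizes the elastic metric used for shape geodesics and for
    computing the Karcher mean of a model library."""
    d = np.gradient(curve, axis=0)                      # discrete c'(t)
    speed = np.linalg.norm(d, axis=1, keepdims=True)
    return d / np.sqrt(np.maximum(speed, 1e-12))        # guard zero-speed points

# Example silhouette: a unit circle sampled at 200 points.
t = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
q = srvf(circle)
print(q.shape)  # (200, 2)
```

A full implementation would additionally reparameterize one curve against the other (dynamic programming over the L2 distance between SRVFs) before measuring the geodesic.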

  20. A synthetic aperture imaging ladar demonstrator with Ø300mm antenna and changeable footprint

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Zhi, Yanan; Yan, Aimin; Xu, Nan; Wang, Lijuan; Wu, Yapeng; Luan, Zhu; Sun, Jianfeng; Liu, Liren

    2010-08-01

    A demonstrator of synthetic aperture imaging ladar (SAIL) is constructed with an antenna telescope of Ø300 mm maximum aperture. The demonstrator can be set with a rectangular aperture to produce a rectangular footprint suitable for a scanning format with high resolution and a wide strip. In particular, the demonstrator is designed not only for far-field application but also for verification and testing in the near field in the laboratory. A 90-degree optical hybrid is used to mitigate the external phase errors caused by turbulence and vibration along the line-of-sight direction and the internal phase errors caused by the local fiber delay line. This paper gives the details of the system design and the progress of experiments at a target distance of around 130 m.

  1. The Esri 3D city information model

    NASA Astrophysics Data System (ADS)

    Reitz, T.; Schubiger-Banz, S.

    2014-02-01

    With residential and commercial space becoming increasingly scarce, cities are going vertical. Managing urban environments in 3D is an increasingly important and complex undertaking. To help solve this problem, Esri has released the ArcGIS for 3D Cities solution. The ArcGIS for 3D Cities solution provides the information model, tools and apps for creating, analyzing and maintaining a 3D city using the ArcGIS platform. This paper presents an overview of the 3D City Information Model and some sample use cases.

  2. Vizcano: Student development of 3-D Volcanic Visualizations

    NASA Astrophysics Data System (ADS)

    Konter, J. G.; Smith-Konter, B. R.

    2008-12-01

    The development and use of 3-D visualizations of volcanoes in the classroom provides a unique way to balance common student curiosity about volcanoes with interests in computer technology and opportunities for exploration. Through the inclusion of multiple scientific datasets, students can develop 3-D volcano visualizations and use these unique tools to investigate relationships between geological, geophysical, and geochemical datasets. This type of exercise allows undergraduates to become familiar with research-type exploration, while graduate students can focus on more specific research questions. This Fall, students enrolled in the Volcanology course at the University of Texas at El Paso will develop 3-D visualizations of major volcanoes on Earth, using Fledermaus and GRASS visualization software. Each visualization project will utilize SRTM v.4 topography data and available LandSat imagery. These data will allow for an initial investigation of the structure of the volcano, including recognition of recent volcanic features. Students will also use seismic data from a variety of online resources to evaluate earthquake locations and earthquake swarms as indicators of volcanic activity. Each visualization project will be archived on a website hosted at UTEP (http://www.geo.utep.edu/pub/jasper/volcano), making each visualization product globally accessible to students, teachers, researchers, and the general public. These student-generated visualizations form an important part of a practical resource for not only students and teachers, but also Earth scientists that are interested in placing their own research in a geospatial context.

  3. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2013-11-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform (DTP) with spatial data and query processing capabilities of Geographic Information Systems (GIS), multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized Directional Replacement Policy (DRP) based buffer management scheme. Polyhedron structures are used in Digital Surface Modeling (DSM) and the smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent of the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g. X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  4. Radiometric Quality Evaluation of INSAT-3D Imager Data

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

    2014-11-01

    INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infrared (IR) channels for the study of weather dynamics over the Indian subcontinent. In this paper, the methodology of radiometric quality evaluation for Level-1 products of the Imager, one of the payloads onboard INSAT-3D, is described. First, the overall visual quality of a scene is assessed in terms of dynamic range, edge sharpness or modulation transfer function (MTF), presence of striping, and other image artefacts. Uniform targets in desert and sea regions are identified, for which a detailed radiometric performance evaluation of the IR channels is carried out. The mean brightness temperature (BT) of each target is computed and validated against independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty, or sensor noise, are studied. Results of radiometric quality evaluation over a duration of eight months (January to August 2014) and a comparison of radiometric consistency before and after the yaw flip of the satellite are presented. Radiometric analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4 K is observed in the brightness temperature values of the TIR-1 channel measured during January-August 2014, indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that the noise-equivalent differential temperature (NEdT) for the IR channels is consistent and well within specifications.
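Brightness temperature is obtained from calibrated radiance by inverting the Planck function. A generic sketch using standard radiation constants (these are not INSAT-3D's published calibration coefficients, and the wavenumber and radiance values are illustrative):

```python
import math

# Radiation constants for radiance in mW m^-2 sr^-1 (cm^-1)^-1 and wavenumber
# in cm^-1; operational products apply additional band-specific corrections.
C1 = 1.191042e-5
C2 = 1.4387752

def brightness_temperature_K(radiance, wavenumber_cm):
    """Invert the Planck function: the equivalent blackbody temperature that
    would emit the observed radiance at the given wavenumber."""
    return C2 * wavenumber_cm / math.log(1.0 + C1 * wavenumber_cm**3 / radiance)

# A TIR-1-like channel near 10.8 um corresponds to roughly 926 cm^-1:
bt = brightness_temperature_K(radiance=100.0, wavenumber_cm=926.0)
print(round(bt, 1))  # a plausible scene temperature in kelvin
```

Validation of the kind described above compares such BT values over uniform desert and sea targets against independent radiometric references.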

  5. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with spatial data and query processing capabilities of geographic information systems, multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling and the smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent of the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  6. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  7. Demonstrated resolution enhancement capability of a stripmap holographic aperture ladar system.

    PubMed

    Venable, Samuel M; Duncan, Bradley D; Dierking, Matthew P; Rabb, David J

    2012-08-01

    Holographic aperture ladar (HAL) is a variant of synthetic aperture ladar (SAL). The two processes are related in that they both seek to increase cross-range (i.e., the direction of the receiver translation) image resolution through the synthesis of a large effective aperture. This is in turn achieved via the translation of a receiver aperture and the subsequent coherent phasing and correlation of multiple received signals. However, while SAL imaging incorporates a translating point detector, HAL takes advantage of a two-dimensional translating sensor array. For the research presented in this article, a side-looking stripmap HAL geometry was used to sequentially image a set of Ronchi ruling targets. Prior to this, theoretical calculations were performed to determine the baseline, single subaperture resolution of our experimental, laboratory-based system. Theoretical calculations were also performed to determine the ideal modulation transfer function (MTF) and expected cross-range HAL image sharpening ratio corresponding to the geometry of our apparatus. To verify our expectations, we first sequentially captured an oversampled collection of pupil plane field segments for each Ronchi ruling. A HAL processing algorithm incorporating a high-precision speckle field registration process was then employed to phase-correct and reposition the field segments. Relative interframe piston phase errors were also removed prior to final synthetic image formation. By then taking the Fourier transform of the synthetic image intensity and examining the fundamental spatial frequency content, we were able to produce experimental modulation transfer function curves, which we then compared with our theoretical expectations. Our results show that we are able to achieve nearly diffraction-limited results for image sharpening ratios as high as 6.43. PMID:22859045
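The MTF-measurement step described above, taking the Fourier transform of the image of a Ronchi ruling and reading off the fundamental spatial frequency, can be sketched with a synthetic profile (the experimental pipeline, with speckle registration and piston-phase correction, is far more involved):

```python
import numpy as np

def modulation_at_fundamental(profile, period_px):
    """Estimate the modulation of a bar-target intensity profile from the
    amplitude of its fundamental spatial frequency, as in an MTF measurement."""
    n = len(profile)
    spec = np.fft.rfft(profile - profile.mean())
    k = round(n / period_px)                   # FFT bin of the fundamental
    amplitude = 2.0 * abs(spec[k]) / n         # sinusoidal amplitude at bin k
    return amplitude / profile.mean()

# Ideal Ronchi-like ruling, period 64 px, over a 512 px line:
x = np.arange(512)
ruling = 0.5 + 0.5 * np.sign(np.sin(2.0 * np.pi * x / 64.0))
m = modulation_at_fundamental(ruling, 64.0)
# an unblurred square wave gives ~4/pi at the fundamental; blur reduces this
```

Comparing such measured modulation values, before and after aperture synthesis, against the diffraction-limited prediction is what yields the experimental MTF curves and image sharpening ratios.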

  8. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  9. Multiple-input multiple-output synthetic aperture ladar system for wide-range swath with high azimuth resolution.

    PubMed

    Tang, Yu; Qin, Bao; Yan, Yun; Xing, Mengdao

    2016-02-20

In a single-input single-output synthetic aperture ladar (SAL) system, the trade-off between high azimuth resolution and a wide-range swath restricts the range swath to a narrow extent. This paper therefore proposes a multiple-input multiple-output (MIMO) synthetic aperture ladar system. The MIMO system adopts a low pulse repetition frequency (PRF) to avoid range ambiguity over the wide-range swath, and in azimuth adopts a multi-channel method, processed through adaptive digital beam-forming technology, to achieve high resolution from the unambiguous azimuth wide-spectrum signal. Simulations and analytical results are presented. PMID:26906593
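
The range-ambiguity constraint behind the PRF trade-off is the standard pulsed-ranging relation R_ua = c / (2 PRF): lowering the PRF widens the unambiguous swath but undersamples the azimuth Doppler spectrum, which the multi-channel scheme then recovers. A minimal sketch with illustrative numbers (not taken from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(prf_hz):
    """Maximum range before echoes from successive pulses overlap."""
    return C / (2.0 * prf_hz)

for prf in (1e3, 10e3, 100e3):
    print(f"PRF {prf:8.0f} Hz -> unambiguous range "
          f"{unambiguous_range(prf) / 1e3:9.2f} km")
```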

  10. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  11. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

Constructing three-dimensional (3-D) landscapes is an essential step in the deep study of biological ecologies, because at every scale in nature, ecosystems are composed of complex 3-D environments and biological behaviors. A 3-D technology that allowed complex ecosystems to be built easily, mimicking the in vivo microenvironment realistically with flexible environmental controls, would be a powerful aid to such explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro landscapes for biophysics studies in vitro. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, and exploring optimized stenting positions for coronary bifurcation disease with 3-D wax printing and our latest home-designed 3-D bio-printer. Although 3-D technologies are not yet considered mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope that through this talk the audience will sense their significance and the predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  12. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations describing the reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated porous media. RT3D was developed from the single-species transport code, MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by the RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  13. Development and analysis of a photon-counting three-dimensional imaging laser detection and ranging (LADAR) system.

    PubMed

    Oh, Min Seok; Kong, Hong Jin; Kim, Tae Hoon; Jo, Sung Eun; Kim, Byung Wook; Park, Dong Jo

    2011-05-01

In this paper, a photon-counting three-dimensional imaging laser detection and ranging (LADAR) system that uses a Geiger-mode avalanche photodiode (GAPD) of relatively short dead time (45 ns) is described. A passively Q-switched microchip laser is used as a laser source and a compact peripheral component interconnect system, which includes a time-to-digital converter (TDC), is set up for fast signal processing. The combination of a GAPD with short dead time and a TDC with a multistop function enables the system to operate in a single-hit or a multihit mode during the acquisition of time-of-flight data. The software for the three-dimensional visualization and an algorithm for the removal of noise are developed. For the photon-counting LADAR system, we establish a theoretical model of target-detection and false-alarm probabilities in both the single-hit and multihit modes with Poisson statistics; this model provides a prediction of the performance of the system and a technique for the acquisition of a noise image with a GAPD. Both the noise image and the three-dimensional image of a scene acquired by the photon-counting LADAR system during the day are presented. PMID:21532685
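
The single-hit Poisson model referred to above reduces to two standard expressions: the probability of at least one count in a range gate given the mean signal-plus-noise photoelectron count, and the same with noise alone. A minimal sketch (the mean counts below are invented for illustration):

```python
import math

def p_detect(n_signal, n_noise):
    """Probability of at least one count in the signal gate, with the
    count Poisson-distributed with mean n_signal + n_noise."""
    return 1.0 - math.exp(-(n_signal + n_noise))

def p_false_alarm(n_noise):
    """Probability of at least one noise count in an empty gate."""
    return 1.0 - math.exp(-n_noise)

print(p_detect(2.0, 0.01))     # ~0.866
print(p_false_alarm(0.01))     # ~0.00995
```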

  14. 3D Dynamic Echocardiography with a Digitizer

    NASA Astrophysics Data System (ADS)

    Oshiro, Osamu; Matani, Ayumu; Chihara, Kunihiro

    1998-05-01

In this paper, a three-dimensional (3D) dynamic ultrasound (US) imaging system is described, in which a US brightness-mode (B-mode) image triggered by the R-wave of an electrocardiogram (ECG) was obtained with an ultrasound diagnostic device and the location and orientation of the US probe were simultaneously measured with a 3D digitizer. The obtained B-mode image was then projected onto a virtual 3D space with the proposed interpolation algorithm using a Gaussian operator. Furthermore, a 3D image was presented on a cathode ray tube (CRT) and stored in virtual reality modeling language (VRML). We performed an experiment to reconstruct a 3D heart image in systole using this system. The experimental results indicate that the system enables visualization of the 3D and internal structure of a heart viewed from any angle and has potential for use in dynamic imaging, intraoperative ultrasonography and tele-medicine.
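
The interpolation step, projecting scattered B-mode samples onto a regular 3D grid with a Gaussian operator, might look like the following generic Gaussian-weighted splatting routine. This is a sketch under assumed conventions (sample coordinates, grid size, and sigma are invented), not the authors' algorithm:

```python
import numpy as np

def gaussian_splat(points, values, grid_shape, sigma=1.0):
    """Scatter-to-grid interpolation: each sample contributes to nearby
    voxels with a Gaussian weight; each voxel takes the weighted mean."""
    acc = np.zeros(grid_shape)
    wsum = np.zeros(grid_shape)
    zz, yy, xx = np.indices(grid_shape)
    for (px, py, pz), v in zip(points, values):
        w = np.exp(-((xx - px)**2 + (yy - py)**2 + (zz - pz)**2)
                   / (2.0 * sigma**2))
        acc += w * v
        wsum += w
    return acc / np.maximum(wsum, 1e-12)

# Two samples along x; the midpoint voxel blends them equally.
vol = gaussian_splat([(1, 2, 2), (3, 2, 2)], [0.0, 10.0],
                     (5, 5, 5), sigma=0.8)
print(vol[2, 2, 2])  # 5.0 at the equidistant midpoint
```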

  15. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  16. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
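
The Lucas-Kanade matching that step (2) generalizes can be illustrated in one dimension: iteratively solve the normal equation for the shift that best aligns two signals. A minimal sketch (the signals and iteration count are invented; the pipeline's plane-fitting version operates on stereo image patches):

```python
import numpy as np

def lk_shift_1d(f, g, iters=5):
    """Estimate d such that g(x) ~= f(x - d) by Gauss-Newton iteration
    on the Lucas-Kanade normal equation."""
    x = np.arange(len(f), dtype=float)
    d = 0.0
    for _ in range(iters):
        fs = np.interp(x - d, x, f)      # f resampled at the current shift
        grad = np.gradient(fs)           # spatial derivative of the model
        err = g - fs
        d -= (err * grad).sum() / (grad ** 2).sum()
    return d

x = np.arange(100, dtype=float)
f = np.exp(-0.5 * ((x - 50.0) / 6.0) ** 2)
g = np.exp(-0.5 * ((x - 51.5) / 6.0) ** 2)   # f shifted right by 1.5
print(lk_shift_1d(f, g))                     # close to 1.5
```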

  17. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.

  18. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  19. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  20. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  1. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.

  2. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those, were used to illustrate the subsurface geology, whereas now, we can create complex digital 3D models. These models are produced with special software, such as GOCAD ®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles), and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures, and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate the use.

  3. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

This paper provides an overview of an aerial 3D printing technology, its development and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, are discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used as well as composites including metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High altitude balloons and parabolic flights will be used to test the effects of microgravity on 3D printing. Zero pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small scale prototype can be sent into low-Earth orbit as a 3-U cube satellite. With the ability to 3D print in space demonstrated, future missions can launch production hardware through which the sustainability and durability of structures in space will be greatly improved.

  4. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  5. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary. PMID:22745004

  6. 3-D seismology in the Arabian Gulf

    SciTech Connect

    Al-Husseini, M.; Chimblo, R.

    1995-08-01

Since 1977, when Aramco and GSI (Geophysical Services International) pioneered the first 3-D seismic survey in the Arabian Gulf under the guidance of Aramco's Chief Geophysicist John Hoke, 3-D seismology has been effectively used to map many complex subsurface geological phenomena. By the mid-1990s extensive 3-D surveys had been acquired in Abu Dhabi, Oman, Qatar and Saudi Arabia, and Bahrain, Kuwait and Dubai were preparing to record surveys over their fields. On the structural side, 3-D has refined seismic maps, focused fault and fracture systems, and outlined the distribution of facies, porosity and fluid saturation. In field development, 3-D has not only reduced drilling costs significantly, but has also improved the understanding of fluid behavior in the reservoir. In Oman, Petroleum Development Oman (PDO) has now acquired the first Gulf 4-D seismic survey (time-lapse 3-D survey) over the Yibal Field. The 4-D survey will allow PDO to directly monitor water encroachment in the highly-faulted Cretaceous Shu'aiba reservoir. In exploration, 3-D seismology has resolved complex prospects with structural and stratigraphic complications and reduced the risk in the selection of drilling locations. The many case studies from Saudi Arabia, Oman, Qatar and the United Arab Emirates reviewed in this paper attest to the effectiveness of 3-D seismology in exploration and production, in clastic and carbonate reservoirs, and in the Mesozoic and Paleozoic.

  7. A 3D Geostatistical Mapping Tool

    Energy Science and Technology Software Center (ESTSC)

    1999-02-09

This software provides accurate 3D reservoir modeling tools and high quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest neighbor methods.
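
Of the mapping algorithms listed, nearest-neighbor gridding is the simplest and can be sketched directly (the data points, values, and grid below are invented for illustration; the tool's actual implementations are not described in this record):

```python
import numpy as np

def nearest_neighbor_grid(xy, values, grid_x, grid_y):
    """Assign each grid node the value of its closest data point."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    nodes = np.stack([gx.ravel(), gy.ravel()], axis=1)
    pts = np.asarray(xy, dtype=float)
    # Squared distance from every node to every data point
    d2 = ((nodes[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    return np.asarray(values)[d2.argmin(axis=1)].reshape(gx.shape)

z = nearest_neighbor_grid([(0, 0), (10, 10)], [1.0, 2.0],
                          np.linspace(0, 10, 11), np.linspace(0, 10, 11))
print(z[0, 0], z[-1, -1])  # 1.0 2.0
```

Kriging replaces this hard assignment with covariance-weighted averages of nearby samples; the gridding loop is otherwise the same shape.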

  8. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  9. Stereoscopic Investigations of 3D Coulomb Balls

    SciTech Connect

    Kaeding, Sebastian; Melzer, Andre; Arp, Oliver; Block, Dietmar; Piel, Alexander

    2005-10-31

    In dusty plasmas particles are arranged due to the influence of external forces and the Coulomb interaction. Recently Arp et al. were able to generate 3D spherical dust clouds, so-called Coulomb balls. Here, we present measurements that reveal the full 3D particle trajectories from stereoscopic imaging.

  10. 3-D structures of planetary nebulae

    NASA Astrophysics Data System (ADS)

    Steffen, W.

    2016-07-01

Recent advances in the 3-D reconstruction of planetary nebulae are reviewed. We include not only results for 3-D reconstructions, but also the current techniques in terms of general methods and software. In order to obtain more accurate reconstructions, we suggest extending the widely used assumption of homologous nebula expansion to map spectroscopically measured velocity to position along the line of sight.

  11. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  12. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  13. Static & Dynamic Response of 3D Solids

    Energy Science and Technology Software Center (ESTSC)

    1996-07-15

    NIKE3D is a large deformations 3D finite element code used to obtain the resulting displacements and stresses from multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature dependent constitutive models are available.

  14. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  15. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  16. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D…

  17. Clinical applications of 3-D dosimeters

    NASA Astrophysics Data System (ADS)

    Wuu, Cheng-Shie

    2015-01-01

    Both 3-D gels and radiochromic plastic dosimeters, in conjunction with dose image readout systems (MRI or optical-CT), have been employed to measure 3-D dose distributions in many clinical applications. The 3-D dose maps obtained from these systems can provide a useful tool for clinical dose verification for complex treatment techniques such as IMRT, SRS/SBRT, brachytherapy, and proton beam therapy. These complex treatments present high dose gradient regions in the boundaries between the target and surrounding critical organs. Dose accuracy in these areas can be critical, and may affect treatment outcome. In this review, applications of 3-D gels and PRESAGE dosimeter are reviewed and evaluated in terms of their performance in providing information on clinical dose verification as well as commissioning of various treatment modalities. Future interests and clinical needs on studies of 3-D dosimetry are also discussed.

  18. Biocompatible 3D Matrix with Antimicrobial Properties.

    PubMed

    Ion, Alberto; Andronescu, Ecaterina; Rădulescu, Dragoș; Rădulescu, Marius; Iordache, Florin; Vasile, Bogdan Ștefan; Surdu, Adrian Vasile; Albu, Madalina Georgiana; Maniu, Horia; Chifiriuc, Mariana Carmen; Grumezescu, Alexandru Mihai; Holban, Alina Maria

    2016-01-01

    The aim of this study was to develop, characterize and assess the biological activity of a new regenerative 3D matrix with antimicrobial properties, based on collagen (COLL), hydroxyapatite (HAp), β-cyclodextrin (β-CD) and usnic acid (UA). The prepared 3D matrix was characterized by Scanning Electron Microscopy (SEM), Fourier Transform Infrared Microscopy (FT-IRM), Transmission Electron Microscopy (TEM), and X-ray Diffraction (XRD). In vitro qualitative and quantitative analyses performed on cultured diploid cells demonstrated that the 3D matrix is biocompatible, allowing the normal development and growth of MG-63 osteoblast-like cells and exhibited an antimicrobial effect, especially on the Staphylococcus aureus strain, explained by the particular higher inhibitory activity of usnic acid (UA) against Gram positive bacterial strains. Our data strongly recommend the obtained 3D matrix to be used as a successful alternative for the fabrication of three dimensional (3D) anti-infective regeneration matrix for bone tissue engineering. PMID:26805790

  19. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors, including radiation hardness, fast time response, active edge and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single sided double column type) 3D detectors in two prototype runs, and the third run is now ongoing. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double sided 3D detectors, is also briefly reported.

  20. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  1. 3D city models completion by fusing lidar and image data

    NASA Astrophysics Data System (ADS)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. Particularly, we combine an existing Lidar and orthomosaic dataset (used as reference), with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching among the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, are then backprojected onto the aerial images, and finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery which is optimally aligned to the reference dataset can be used for the generation of an enhanced and more accurately textured 3D city model.
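    The step of assigning 3D coordinates to matched 2D features by interpolation on the reference Lidar surface can be sketched as follows. This is a minimal illustration assuming a regular-grid DSM addressed by continuous (column, row) coordinates; the function name and grid convention are assumptions, not the authors' implementation.

```python
import numpy as np

def interpolate_height(dsm, x, y):
    """Bilinearly interpolate an elevation from a regular DSM grid.

    dsm is a 2D array of elevations; (x, y) are continuous grid
    coordinates (column, row) strictly inside the grid interior.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return (dsm[y0, x0] * (1 - dx) * (1 - dy)
            + dsm[y0, x0 + 1] * dx * (1 - dy)
            + dsm[y0 + 1, x0] * (1 - dx) * dy
            + dsm[y0 + 1, x0 + 1] * dx * dy)
```

In the pipeline above, each matched 2D orthophoto point would be looked up this way to obtain its elevation before backprojection into the aerial images.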

  2. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from traditional map-making to modern technology where information can be created, edited, managed and analyzed. Like other models, maps are simplified representations of the real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing the spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web server, web applications and client server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  3. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  4. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  5. Texture mapping based on multiple aerial imageries in urban areas

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Ye, Siqi; Wang, Yuefeng; Han, Caiyun; Wang, Chenxi

    2015-12-01

    In realistic 3D model reconstruction, the requirements on texture are very high. Texture is one of the key factors affecting the realism of a model and is applied through texture mapping. In this paper we present a practical approach to texture mapping, based on photogrammetric theory, from multiple aerial images of urban areas. The model and the imagery are matched through the collinearity equations, and, in order to improve texture quality, we describe an automatic approach for selecting the optimal texture for a 3D building from aerial images of multiple strips. Building textures can be matched automatically by the algorithm. The experimental results show that the texture mapping platform has a high degree of automation and improves the efficiency of 3D model reconstruction.
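    The collinearity equations used to match model points to image positions can be sketched as below; this is a generic photogrammetric formulation, with illustrative parameters rather than values from the paper.

```python
import numpy as np

def project_collinearity(X, X0, R, f):
    """Project an object point into photo coordinates via the
    collinearity equations.

    X  : 3D object point
    X0 : perspective centre of the camera
    R  : 3x3 rotation matrix (object frame -> camera frame)
    f  : focal length, in the same units as the photo coordinates
    Returns photo coordinates (x, y).
    """
    d = R @ (np.asarray(X, float) - np.asarray(X0, float))
    x = -f * d[0] / d[2]
    y = -f * d[1] / d[2]
    return x, y
```

For texture selection, each building facet would be projected this way into every candidate aerial image, and the image giving the best viewing geometry would supply the texture.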

  6. The 3D Elevation Program: summary for Michigan

    USGS Publications Warehouse

    Carswell, William J., Jr.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features. The Michigan Statewide Authoritative Imagery and Lidar (MiSAIL) program provides statewide lidar coordination with local, State, and national groups in support of 3DEP for Michigan.

  7. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. PMID:26562233

  8. 3D bioprinting of tissues and organs.

    PubMed

    Murphy, Sean V; Atala, Anthony

    2014-08-01

    Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology. PMID:25093879

  9. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on the optically addressed bi-stable display, which does not need any power to hold an image after it is uploaded. Recently, demand for 3D image displays has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve high complexity for image processing at both the hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD that divides the given image into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from different domains of the image in different ways. Polaroid glasses can then be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a proper ORWLCD printer and image processing; therefore, with easy image refreshing and good image quality, such displays can be applied in many applications, e.g., 3D bi-stable displays, security elements, etc. PMID:25361316
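    The polarization action of the quarter-wave plate can be illustrated with standard Jones calculus. This is a textbook sketch of the underlying optics, not the authors' device model: a quarter-wave plate with its fast axis at 45° converts linearly polarized light into circularly polarized light.

```python
import numpy as np

def quarter_wave_plate(theta):
    """Jones matrix of a quarter-wave plate with fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Retarder with a pi/2 phase difference between fast and slow axes
    W = np.array([[1, 0], [0, 1j]])
    return R @ W @ R.T

horizontal = np.array([1.0, 0.0])              # linearly polarized input
out = quarter_wave_plate(np.pi / 4) @ horizontal
# With the fast axis at 45 deg the two output components have equal
# amplitude and a 90 deg relative phase, i.e. circular polarization.
```

Orienting the plate's axis differently over different image domains, as in the abstract, would send left- and right-circular light to the two lenses of the Polaroid glasses.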

  10. Extra Dimensions: 3D in PDF Documentation

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2012-12-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company. Incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open source libraries which allow the programmatic creation of 3D content in PDF format. While not providing the same level of access to CAD files as the commercial software, it does provide physicists with an alternative path to incorporate 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

  11. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  12. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  13. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  14. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  15. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high density, high performance microelectronics pursued through a dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  16. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme to hide secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13 layers. To the best of our knowledge, this novel approach can provide much higher hiding capacity than other state-of-the-art approaches, while obeying the low distortion and security basic requirements for steganography on 3D models. PMID:19147891
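    The idea of hiding message bits in the vertices of a polygon model can be illustrated with a toy single-layer parity embedding; the paper's scheme is multilayered and far more sophisticated, and the function names and quantization step below are assumptions for illustration only.

```python
def embed_bits(vertices, bits, step=1e-4):
    """Hide one bit per vertex coordinate by snapping the coordinate to
    an even or odd multiple of `step` (a toy, single-layer variant of
    multilayer vertex embedding). Distortion is bounded by ~1.5 * step."""
    flat = [c for v in vertices for c in v]
    assert len(bits) <= len(flat), "cover model too small for message"
    out = []
    for c, b in zip(flat, bits):
        q = round(c / step)
        if q % 2 != b:        # force quantizer parity to encode the bit
            q += 1
        out.append(q * step)
    out.extend(flat[len(bits):])          # untouched coordinates
    return [tuple(out[i:i + 3]) for i in range(0, len(out), 3)]

def extract_bits(vertices, n, step=1e-4):
    """Recover the first n embedded bits from the stego model."""
    flat = [c for v in vertices for c in v]
    return [round(c / step) % 2 for c in flat[:n]]
```

A multilayered scheme, as in the paper, would repeat such an embedding at several quantization scales per vertex, which is how hiding capacity grows with the number of layers.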

  17. New method of 3-D object recognition

    NASA Astrophysics Data System (ADS)

    He, An-Zhi; Li, Qun Z.; Miao, Peng C.

    1991-12-01

    In this paper, a new method of 3-D object recognition using optical techniques and a computer is presented. We perform 3-D object recognition by using moire contours to obtain the object's 3-D coordinates, describing the object by its projections onto the three coordinate planes, and matching objects by querying a judgement library. The recognition of a simple geometrical entity is simulated by computer and studied experimentally. The recognition of an object composed of a few simple geometrical entities is also discussed.

  18. Explicit 3-D Hydrodynamic FEM Program

    Energy Science and Technology Software Center (ESTSC)

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.

  19. How We 3D-Print Aerogel

    SciTech Connect

    2015-04-23

    A new type of graphene aerogel will make for better energy storage, sensors, nanoelectronics, catalysis and separations. Lawrence Livermore National Laboratory researchers have made graphene aerogel microlattices with an engineered architecture via a 3D printing technique known as direct ink writing. The research appears in the April 22 edition of the journal, Nature Communications. The 3D printed graphene aerogels have high surface area, excellent electrical conductivity, are lightweight, have mechanical stiffness and exhibit supercompressibility (up to 90 percent compressive strain). In addition, the 3D printed graphene aerogel microlattices show an order of magnitude improvement over bulk graphene materials and much better mass transport.

  20. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  1. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  2. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  3. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  4. Explicit 3-D Hydrodynamic FEM Program

    SciTech Connect

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.

  5. XML3D and Xflow: combining declarative 3D for the Web with generic data flows.

    PubMed

    Klein, Felix; Sons, Kristian; Rubinstein, Dmitri; Slusallek, Philipp

    2013-01-01

    Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing. PMID:24808080

  6. JAR3D Webserver: Scoring and aligning RNA loop sequences to known 3D motifs.

    PubMed

    Roll, James; Zirbel, Craig L; Sweeney, Blake; Petrov, Anton I; Leontis, Neocles

    2016-07-01

    Many non-coding RNAs have been identified and may function by forming 2D and 3D structures. RNA hairpin and internal loops are often represented as unstructured on secondary structure diagrams, but RNA 3D structures show that most such loops are structured by non-Watson-Crick basepairs and base stacking. Moreover, different RNA sequences can form the same RNA 3D motif. JAR3D finds possible 3D geometries for hairpin and internal loops by matching loop sequences to motif groups from the RNA 3D Motif Atlas, by exact sequence match when possible, and by probabilistic scoring and edit distance for novel sequences. The scoring gauges the ability of the sequences to form the same pattern of interactions observed in 3D structures of the motif. The JAR3D webserver at http://rna.bgsu.edu/jar3d/ takes one or many sequences of a single loop as input, or else one or many sequences of longer RNAs with multiple loops. Each sequence is scored against all current motif groups. The output shows the ten best-matching motif groups. Users can align input sequences to each of the motif groups found by JAR3D. JAR3D will be updated with every release of the RNA 3D Motif Atlas, and so its performance is expected to improve over time. PMID:27235417
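    The edit-distance fallback mentioned above for novel loop sequences can be illustrated with a plain Levenshtein distance. JAR3D's actual scoring is probabilistic and motif-specific, so this is only a generic sketch of the distance component.

```python
def edit_distance(a, b):
    """Levenshtein distance between sequences a and b, computed with
    the classic dynamic-programming recurrence in O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

A candidate loop sequence closest (in this sense) to the sequences already observed in a motif group would rank that group higher when no exact match exists.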

  7. 3D Globe Support for Arctic Science through the Arctic Research Mapping Application (ARMAP)

    NASA Astrophysics Data System (ADS)

    Brady, J.; Johnson, G. W.; Gaylord, A. G.; Cody, R.; Gonzalez, J. C.; Franko, J. C.; Dover, M.; Garcia-Lavigne, D.; Manley, W.; Score, R.; Tweedie, C. E.

    2008-12-01

    Virtual Globes or 3D Geobrowsers play a crucial role in the visualization of spatial data for scientific research. While many applications provide the ability to visualize data, they lack the necessary GIS functionality to query the information. In addition, many users want to overlay their own tabular, vector and raster data on a virtual globe. The 3D Arctic Research Mapping Application (ARMAP 3D) provides a free 3D geobrowser that includes query functionality and support for many data formats and map services. ARMAP 3D was developed on top of a free software application from the Environmental Systems Research Institute (ESRI) called ArcGIS Explorer (AGX). Several custom tasks as well as a customizable interface have been developed for ARMAP 3D with AGX's own software development kit (SDK) using the .NET framework. ARMAP 3D includes high resolution imagery and information from the Arctic Research Logistics Support Service (ARLSS) database, which is funded by the National Science Foundation (NSF). ARLSS includes information about NSF research locations as well as locations from the National Aeronautics and Space Administration (NASA) and the National Oceanic and Atmospheric Administration (NOAA). With special emphasis on the International Polar Year (IPY), ARMAP has targeted science planners, scientists, educators, and the general public. In sum, ARMAP goes beyond a simple map display to enable analysis, synthesis, and coordination of Arctic research. Information on the ARMAP suite of applications and services may be accessed via the gateway web site at http://www.armap.org.

  8. Quality Analysis of 3d Surface Reconstruction Using Multi-Platform Photogrammetric Systems

    NASA Astrophysics Data System (ADS)

    Lari, Z.; El-Sheimy, N.

    2016-06-01

In recent years, the necessity of accurate 3D surface reconstruction has been more pronounced for a wide range of mapping, modelling, and monitoring applications. The 3D data for satisfying the needs of these applications can be collected using different digital imaging systems. Among them, photogrammetric systems have recently received considerable attention due to significant improvements in digital imaging sensors, emergence of new mapping platforms, and development of innovative data processing techniques. To date, a variety of techniques have been proposed for 3D surface reconstruction using imagery collected by multi-platform photogrammetric systems. However, these approaches suffer from the lack of a well-established quality control procedure which evaluates the quality of reconstructed 3D surfaces independent of the utilized reconstruction technique. Hence, this paper aims to introduce a new quality assessment platform for the evaluation of 3D surface reconstruction using photogrammetric data. This quality control procedure is performed while considering the quality of input data, processing procedures, and photo-realistic 3D surface modelling. The feasibility of the proposed quality control procedure is finally verified by quality assessment of the 3D surface reconstruction using images from different photogrammetric systems.

  9. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

Synthetic aperture ladar (SAL) is a high-resolution imaging sensor whose data contain phase errors arising from uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that recovers both the images and the point spread functions (PSFs) without any a priori information on the PSF, speeding up convergence through a careful choice of initialization. When integrated into a spotlight-mode SAL imaging model, all three methods effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error, and CPU time are computed, showing that the convergence rate of the hybrid method improves because blind deconvolution starts from a more efficient initialization. Further analysis of the hybrid method shows that the weighting between ROPE and IBD is an important factor affecting the final result of the whole compensation process.

  10. Phase error suppression by low-pass filtering for synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Sun, Zhiwei; Hou, Peipei; Zhi, Ya'nan; Sun, Jianfeng; Zhou, Yu; Xu, Qian; Lu, Zhiyong; Liu, Liren

    2014-09-01

Compared to synthetic aperture radar (SAR), synthetic aperture imaging ladar (SAIL) is more sensitive to the phase errors induced by atmospheric turbulence, undesirable line-of-sight translation and vibration, and waveform phase error, because the optical wavelength is about 3-6 orders of magnitude shorter than radio wavelengths. These phase errors deteriorate the imaging results. In this paper, an algorithm based on low-pass filtering to suppress the phase error is proposed. In this algorithm, the azimuth quadratic phase history with phase error is compensated, the fast Fourier transform (FFT) is performed in the azimuth direction, low-pass filtering is applied, the inverse FFT is performed, and the image is then reconstructed simultaneously in the range and azimuth directions by a two-dimensional (2D) FFT. The high-frequency phase error can be effectively eliminated, so the imaging results can be optimized by this algorithm. A mathematical analysis based on the data-collection equation of side-looking SAIL is presented, along with theoretical modeling results. In addition, based on this algorithm, a principle scheme of an optical processor is proposed. A verification experiment is performed using data obtained from a SAIL demonstrator.
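The azimuth processing chain described in this abstract (compensate the quadratic phase history, azimuth FFT, low-pass filter, inverse FFT) can be sketched on simulated one-dimensional data. This is only an illustration of the idea on an invented phase-error model, not the paper's implementation; all parameters are made up:

```python
import numpy as np

N = 1024
t = np.linspace(-1, 1, N)
quadratic = np.pi * 50 * t**2                 # known quadratic azimuth phase
error = 0.5 * np.sin(2 * np.pi * 200 * t)     # invented high-frequency phase error
signal = np.exp(1j * (quadratic + error))

comp = signal * np.exp(-1j * quadratic)       # 1) compensate quadratic phase

spec = np.fft.fftshift(np.fft.fft(comp))      # 2) FFT in azimuth direction
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=t[1] - t[0]))
filtered_spec = spec * (np.abs(freqs) < 50.0) # 3) low-pass below error band
filtered = np.fft.ifft(np.fft.ifftshift(filtered_spec))  # 4) inverse FFT

before = float(np.std(np.angle(comp)))        # residual phase std before filtering
after = float(np.std(np.angle(filtered)))     # ... and after
print(before, after)
```

Because the sinusoidal phase error lives well above the 50 Hz cutoff, the filtered signal's residual phase is close to zero, illustrating how the high-frequency error is suppressed before the final 2D reconstruction.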

  11. A demonstrator of all-optronic multifunctional down-looking synthetic aperture LADAR

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Lu, Zhiyong; Sun, Zhiwei; Zhang, Ning; Sun, Jianfeng; Wang, Lijuan; Liu, Liren

    2015-09-01

The design and laboratory experiment of a demonstrator of an all-optronic down-looking synthetic aperture imaging ladar (SAL), in which the sensing-to-processing chain is carried out entirely with light, are presented in this paper. The ultra-fast processing capability from image acquisition to real-time reconstruction is shown. The demonstrator consists of a down-looking SAL unit with a beam scanner and an optical processor. The down-looking SAL unit has a transmitter of two coaxial orthogonally polarized beams and a receiver using polarization-interference self-heterodyne balanced detection. The linear phase modulation and the quadratic phase history are produced by the projection of movable cylindrical lenses. Three functions are available: strip-map mode, spotlight mode, and static mode. The optical processor is an astigmatic optical system, which reduces to a Fourier-transform system and a free-space Fresnel diffraction section to realize the matched filtering. A spatial light modulator is used as the input interface. The experiment is performed with an optical collimator, and the system design is given. The down-looking SAL features wide coverage with an enhanced receiving aperture and little influence from atmospheric turbulence, and the optical processor is simple.

  12. Resampling technique in the orthogonal direction for down-looking Synthetic Aperture Imaging Ladar

    NASA Astrophysics Data System (ADS)

    Li, Guangyuan; Sun, Jianfeng; Lu, Zhiyong; Zhang, Ning; Cai, Guangyu; Sun, Zhiwei; Liu, Liren

    2015-09-01

The implementation of down-looking Synthetic Aperture Imaging Ladar (SAIL) uses quadratic phase history reconstruction in the travel direction and linear phase modulation reconstruction in the orthogonal direction. The linear phase modulation in the orthogonal direction is generated by the shift of two cylindrical lenses in the two polarization-orthogonal beams. Fast movement of the two cylindrical lenses is therefore necessary for airborne down-looking SAIL to match the aircraft flight speed and to realize compression in the orthogonal direction, but the rapid starts and stops of the cylindrical lenses greatly stress the motor and make the motion trail non-uniform. To reduce this stress and obtain a smoother trajectory, we drive the motor along a sinusoidal curve, which is a more realistic movement, and through a resampling interpolation imaging algorithm we transform the resulting nonlinear phase into a linear phase, obtaining good reconstruction results for point and area targets in the laboratory. The influence on imaging quality of different sampling positions under sinusoidal motor motion, and the necessity of the algorithm, are analyzed. Finally, we compare the resolution of the results in the two cases.
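The resampling idea can be sketched in one dimension: when the lens follows a sinusoidal trajectory, the phase history is sampled non-uniformly in position, and interpolating back onto a uniform position grid restores a linear phase that a standard FFT can compress. All parameters below are invented for the demonstration and do not come from the paper:

```python
import numpy as np

N = 512
tt = np.linspace(0, 1, N)                 # uniform time samples
pos = np.sin(0.5 * np.pi * tt)            # sinusoidal lens position (monotonic)

k = 40.0                                  # hypothetical target spatial frequency
samples = np.exp(1j * 2 * np.pi * k * pos)  # phase linear in position,
                                            # nonlinear (chirped) in time

# resample onto a uniform position grid by linear interpolation
uniform_pos = np.linspace(pos[0], pos[-1], N)
resampled = (np.interp(uniform_pos, pos, samples.real)
             + 1j * np.interp(uniform_pos, pos, samples.imag))

# after resampling, the spectrum collapses to a sharp peak
peak_raw = float(np.abs(np.fft.fft(samples)).max() / N)
peak_resampled = float(np.abs(np.fft.fft(resampled)).max() / N)
print(peak_raw, peak_resampled)
```

The raw time-domain samples form a chirp whose spectrum is smeared across many bins, while the resampled sequence compresses to a single dominant peak, which is the point-target focusing the abstract describes.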

  13. 3D-printed bioanalytical devices

    NASA Astrophysics Data System (ADS)

    Bishop, Gregory W.; Satterwhite-Warden, Jennifer E.; Kadimisetty, Karteek; Rusling, James F.

    2016-07-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices.

  14. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under view. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
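The stereo-vision approach mentioned above recovers depth by triangulation: for rectified cameras, depth follows from focal length f, baseline B, and pixel disparity d as Z = f·B/d. A minimal sketch with illustrative numbers (not from the presentation):

```python
# Stereo triangulation sketch, assuming rectified cameras.
# Depth Z = f * B / d, with f in pixels, baseline B in meters,
# and disparity d in pixels. Values below are invented.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (meters) of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# 700 px focal length, 10 cm baseline, 35 px disparity -> 2.0 m depth
print(depth_from_disparity(700.0, 0.10, 35.0))
```

The formula also shows why depth resolution degrades with distance: Z is inversely proportional to disparity, so a one-pixel disparity error matters more for far points than for near ones.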

  15. Tropical Cyclone Jack in Satellite 3-D

    NASA Video Gallery

This 3-D flyby from NASA's TRMM satellite of Tropical Cyclone Jack on April 21 shows that some of the thunderstorms observed by TRMM's Precipitation Radar were still reaching heights of at least 17 km (10.5 miles). ...

  16. 3D Printing for Tissue Engineering

    PubMed Central

    Jia, Jia; Yao, Hai; Mei, Ying

    2016-01-01

    Tissue engineering aims to fabricate functional tissue for applications in regenerative medicine and drug testing. More recently, 3D printing has shown great promise in tissue fabrication with a structural control from micro- to macro-scale by using a layer-by-layer approach. Whether through scaffold-based or scaffold-free approaches, the standard for 3D printed tissue engineering constructs is to provide a biomimetic structural environment that facilitates tissue formation and promotes host tissue integration (e.g., cellular infiltration, vascularization, and active remodeling). This review will cover several approaches that have advanced the field of 3D printing through novel fabrication methods of tissue engineering constructs. It will also discuss the applications of synthetic and natural materials for 3D printing facilitated tissue fabrication. PMID:26869728

  17. 3D Visualization of Recent Sumatra Earthquake

    NASA Astrophysics Data System (ADS)

    Nayak, Atul; Kilb, Debi

    2005-04-01

    Scientists and visualization experts at the Scripps Institution of Oceanography have created an interactive three-dimensional visualization of the 28 March 2005 magnitude 8.7 earthquake in Sumatra. The visualization shows the earthquake's hypocenter and aftershocks recorded until 29 March 2005, and compares it with the location of the 26 December 2004 magnitude 9 event and the consequent seismicity in that region. The 3D visualization was created using the Fledermaus software developed by Interactive Visualization Systems (http://www.ivs.unb.ca/) and stored as a ``scene'' file. To view this visualization, viewers need to download and install the free viewer program iView3D (http://www.ivs3d.com/products/iview3d).

  18. Future Engineers 3-D Print Timelapse

    NASA Video Gallery

    NASA Challenges K-12 students to create a model of a container for space using 3-D modeling software. Astronauts need containers of all kinds - from advanced containers that can study fruit flies t...

  19. 3-D Flyover Visualization of Veil Nebula

    NASA Video Gallery

    This 3-D visualization flies across a small portion of the Veil Nebula as photographed by the Hubble Space Telescope. This region is a small part of a huge expanding remnant from a star that explod...

  20. Quantifying Modes of 3D Cell Migration.

    PubMed

    Driscoll, Meghan K; Danuser, Gaudenz

    2015-12-01

    Although it is widely appreciated that cells migrate in a variety of diverse environments in vivo, we are only now beginning to use experimental workflows that yield images with sufficient spatiotemporal resolution to study the molecular processes governing cell migration in 3D environments. Since cell migration is a dynamic process, it is usually studied via microscopy, but 3D movies of 3D processes are difficult to interpret by visual inspection. In this review, we discuss the technologies required to study the diversity of 3D cell migration modes with a focus on the visualization and computational analysis tools needed to study cell migration quantitatively at a level comparable to the analyses performed today on cells crawling on flat substrates. PMID:26603943

  1. 3D-patterned polymer brush surfaces

    NASA Astrophysics Data System (ADS)

    Zhou, Xuechang; Liu, Xuqing; Xie, Zhuang; Zheng, Zijian

    2011-12-01

    Polymer brush-based three-dimensional (3D) structures are emerging as a powerful platform to engineer a surface by providing abundant spatially distributed chemical and physical properties. In this feature article, we aim to give a summary of the recent progress on the fabrication of 3D structures with polymer brushes, with a particular focus on the micro- and nanoscale. We start with a brief introduction on polymer brushes and the challenges to prepare their 3D structures. Then, we highlight the recent advances of the fabrication approaches on the basis of traditional polymerization time and grafting density strategies, and a recently developed feature density strategy. Finally, we provide some perspective outlooks on the future directions of engineering the 3D structures with polymer brushes.

  2. Modeling Cellular Processes in 3-D

    PubMed Central

    Mogilner, Alex; Odde, David

    2011-01-01

    Summary Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated, we must address the issue of modeling cellular processes in 3-D. Here, we highlight recent advances related to 3-D modeling in cell biology. While some processes require full 3-D analysis, we suggest that others are more naturally described in 2-D or 1-D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3-D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling. PMID:22036197

  3. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

Eyes on the Earth 3D software gives scientists, and the general public, a real-time, interactive 3D means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth science data sets that have been collected on a daily basis. This application uses a third-party, real-time, interactive 3D game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  4. 3-D Animation of Typhoon Bopha

    NASA Video Gallery

    This 3-D animation of NASA's TRMM satellite data showed Typhoon Bopha tracking over the Philippines on Dec. 3 and moving into the Sulu Sea on Dec. 4, 2012. TRMM saw heavy rain (red) was falling at ...

  5. 3-D TRMM Flyby of Hurricane Amanda

    NASA Video Gallery

    The TRMM satellite flew over Hurricane Amanda on Tuesday, May 27 at 1049 UTC (6:49 a.m. EDT) and captured rainfall rates and cloud height data that was used to create this 3-D simulated flyby. Cred...

  6. Cyclone Rusty's Landfall in 3-D

    NASA Video Gallery

    This 3-D image derived from NASA's TRMM satellite Precipitation Radar data on February 26, 2013 at 0654 UTC showed that the tops of some towering thunderstorms in Rusty's eye wall were reaching hei...

  7. TRMM 3-D Flyby of Ingrid

    NASA Video Gallery

    This 3-D flyby of Tropical Storm Ingrid's rainfall was created from TRMM satellite data for Sept. 16. Heaviest rainfall appears in red towers over the Gulf of Mexico, while moderate rainfall stretc...

  8. 3D-printed bioanalytical devices.

    PubMed

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-07-15

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices. PMID:27250897

  9. Palacios field: A 3-D case history

    SciTech Connect

    McWhorter, R.; Torguson, B.

    1994-12-31

In late 1992, Mitchell Energy Corporation acquired a 7.75 sq mi (20.0 km²) 3-D seismic survey over Palacios field, Matagorda County, Texas. The company shot the survey to help evaluate the field for further development by delineating the fault pattern of the producing Middle Oligocene Frio interval. The authors compare the mapping of the field before and after the 3-D survey. This comparison shows that the 3-D volume yields superior fault imaging and interpretability compared to the dense 2-D data set. The problems with the 2-D data set are improper imaging of small and oblique faults and insufficient coverage over a complex fault pattern. Whereas the 2-D data set validated a simple fault model, the 3-D volume revealed a more complex history of faulting that includes three different fault systems. This discovery enabled them to reconstruct the depositional and structural history of Palacios field.

  10. Radiosity diffusion model in 3D

    NASA Astrophysics Data System (ADS)

    Riley, Jason D.; Arridge, Simon R.; Chrysanthou, Yiorgos; Dehghani, Hamid; Hillman, Elizabeth M. C.; Schweiger, Martin

    2001-11-01

We present the Radiosity-Diffusion model in three dimensions (3D), as an extension to previous work in 2D. It is a method for handling non-scattering spaces in optically participating media. We present the extension of the model to 3D, including an extension to cope with the increased complexity of the 3D domain. We show that in 3D more careful consideration must be given to the issues of meshing and visibility to model the transport of light within reasonable computational bounds. We demonstrate the model to be comparable to Monte-Carlo simulations for selected geometries, and show preliminary results of comparisons to measured time-resolved data acquired on resin phantoms.

  11. 3D-HST results and prospects

    NASA Astrophysics Data System (ADS)

    Van Dokkum, Pieter G.

    2015-01-01

    The 3D-HST survey is providing a comprehensive census of the distant Universe, combining HST WFC3 imaging and grism spectroscopy with a myriad of other ground- and space-based datasets. This talk constitutes an overview of science results from the survey, with a focus on ongoing work and ways to exploit the rich public release of the 3D-HST data.

  12. Improved Prediction of Momentum and Scalar Fluxes Using MODIS Imagery

    NASA Technical Reports Server (NTRS)

    Crago, Richard D.; Jasinski, Michael F.

    2003-01-01

    There are remote sensing and science objectives. The remote sensing objectives are: To develop and test a theoretical method for estimating local momentum aerodynamic roughness length, z(sub 0m), using satellite multispectral imagery. To adapt the method to the MODIS imagery. To develop a high-resolution (approx. 1km) gridded dataset of local momentum roughness for the continental United States and southern Canada, using MODIS imagery and other MODIS derived products. The science objective is: To determine the sensitivity of improved satellite-derived (MODIS-) estimates of surface roughness on the momentum and scalar fluxes, within the context of 3-D atmospheric modeling.

  13. Assessing 3d Photogrammetry Techniques in Craniometrics

    NASA Astrophysics Data System (ADS)

    Moshobane, M. C.; de Bruyn, P. J. N.; Bester, M. N.

    2016-06-01

Morphometrics (the measurement of morphological features) has been revolutionized by the creation of new techniques to study how organismal shape co-varies with several factors such as ecophenotypy. Ecophenotypy refers to the divergence of phenotypes due to developmental changes induced by local environmental conditions, producing distinct ecophenotypes. None of the techniques hitherto utilized could explicitly address organismal shape in a complete biological form, i.e. three-dimensionally. This study investigates the use of the commercial Photomodeler Scanner® (PMSc®) three-dimensional (3D) modelling software to produce accurate and high-resolution 3D models, in this case of Subantarctic fur seal (Arctocephalus tropicalis) and Antarctic fur seal (Arctocephalus gazella) skulls, which allows for 3D measurements. Using this method, sixteen accurate 3D skull models were produced and five metrics were determined. The 3D linear measurements were compared to measurements taken manually with a digital caliper. In addition, repeated measurements were recorded by different researchers to determine repeatability. To allow for comparison, straight-line measurements were taken with the software, assuming that close accord with all manually measured features would illustrate the model's accurate replication of reality. Measurements were not significantly different, demonstrating that realistic 3D skull models can be successfully produced to provide a consistent basis for craniometrics, with the additional benefit of allowing non-linear measurements if required.
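The accuracy check described (model-derived linear measurements against digital-caliper measurements of the same features) can be sketched with summary statistics. The values below are invented placeholders, not the study's data; the paper reports only that the two methods did not differ significantly:

```python
# Hypothetical caliper vs. 3D-model measurements (mm) for five skull metrics.
caliper_mm = [212.4, 118.7, 64.2, 95.1, 71.8]
model_mm   = [212.9, 118.2, 64.5, 94.8, 72.1]

diffs = [m - c for m, c in zip(model_mm, caliper_mm)]
mean_abs_diff = sum(abs(d) for d in diffs) / len(diffs)       # mm
max_rel_err = max(abs(d) / c for d, c in zip(diffs, caliper_mm))
print(round(mean_abs_diff, 3), round(max_rel_err, 5))
```

In a real study these summaries would feed a paired significance test across repeated measurements by different researchers, which is how the abstract's repeatability claim is assessed.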

  14. 3D model reconstruction of underground goaf

    NASA Astrophysics Data System (ADS)

    Fang, Yuanmin; Zuo, Xiaoqing; Jin, Baoxuan

    2005-10-01

Constructing a 3D model of an underground goaf allows better control of the mining process and more reasonable arrangement of mining work. However, the shapes of goafs and of the laneways among them are very irregular, which makes data acquisition and 3D model reconstruction difficult. In this paper, we investigate methods for data acquisition and 3D model construction of underground goafs, and we build topological relations among goafs. The main contents are as follows: a) an efficient encoding rule is proposed for structuring the field measurement data; b) a 3D model construction method for goafs is put forward, based on combining several TIN (triangulated irregular network) pieces, together with an efficient automatic algorithm for processing TIN boundaries; c) topological relations among goaf models are established. The TIN object is the basic modelling element of the goaf 3D model, and the topological relations among goafs are created and maintained by building topological relations among TIN objects. On this basis, various 3D spatial analysis functions can be performed, including transects and volume calculation of goafs. A prototype implementing the models and algorithms proposed in this paper has been developed.
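One of the spatial analyses mentioned, goaf volume calculation from a TIN surface, can be sketched as a sum over triangles of projected area times mean height above a reference plane. This is a generic TIN-volume sketch with invented data, not the paper's algorithm:

```python
# Volume between a TIN surface and the plane z = base_z, computed per
# triangle as (projected xy-area) x (mean vertex height above base_z).
# Vertices and triangles below are a toy example.

def tin_volume(vertices, triangles, base_z=0.0):
    """Approximate volume enclosed between a TIN and z = base_z."""
    vol = 0.0
    for i, j, k in triangles:
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = (
            vertices[i], vertices[j], vertices[k])
        # projected triangle area in the xy-plane
        area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        mean_h = (z1 + z2 + z3) / 3.0 - base_z
        vol += area * mean_h
    return vol

# unit square roof at height 2, split into two triangles -> volume 2.0
verts = [(0, 0, 2), (1, 0, 2), (1, 1, 2), (0, 1, 2)]
tris = [(0, 1, 2), (0, 2, 3)]
print(tin_volume(verts, tris))
```

For a closed goaf bounded by an upper and a lower TIN, the enclosed volume is the difference of the two surface volumes over the same reference plane.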

  15. 3D steerable wavelets in practice.

    PubMed

    Chenouard, Nicolas; Unser, Michael

    2012-11-01

    We introduce a systematic and practical design for steerable wavelet frames in 3D. Our steerable wavelets are obtained by applying a 3D version of the generalized Riesz transform to a primary isotropic wavelet frame. The novel transform is self-reversible (tight frame) and its elementary constituents (Riesz wavelets) can be efficiently rotated in any 3D direction by forming appropriate linear combinations. Moreover, the basis functions at a given location can be linearly combined to design custom (and adaptive) steerable wavelets. The features of the proposed method are illustrated with the processing and analysis of 3D biomedical data. In particular, we show how those wavelets can be used to characterize directional patterns and to detect edges by means of a 3D monogenic analysis. We also propose a new inverse-problem formalism along with an optimization algorithm for reconstructing 3D images from a sparse set of wavelet-domain edges. The scheme results in high-quality image reconstructions which demonstrate the feature-reduction ability of the steerable wavelets as well as their potential for solving inverse problems. PMID:22752138
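The Riesz-transform construction underlying these steerable wavelets can be sketched as a set of Fourier multipliers, one per axis. The snippet below is a minimal 3D Riesz transform on a toy volume and checks the energy-preservation (tight-frame) property the abstract mentions; it is not the authors' wavelet code, and all data are random:

```python
import numpy as np

def riesz3d(vol):
    """Return the three Riesz components: F^-1[ -i w_i/|w| F[vol] ]."""
    grids = np.meshgrid(*[np.fft.fftfreq(n) for n in vol.shape], indexing="ij")
    norm = np.sqrt(sum(g**2 for g in grids))
    norm[0, 0, 0] = 1.0                      # avoid division by zero at DC
    F = np.fft.fftn(vol)
    return [np.fft.ifftn(-1j * g / norm * F) for g in grids]

rng = np.random.default_rng(0)
vol = rng.standard_normal((16, 16, 16))
vol -= vol.mean()                            # the Riesz transform drops DC

components = riesz3d(vol)
energy_in = float(np.sum(vol**2))
energy_out = float(sum(np.sum(np.abs(c)**2) for c in components))
print(energy_in, energy_out)
```

Because the squared multipliers sum to one at every nonzero frequency, the three components together conserve the energy of any zero-mean input, which is the self-reversibility (tight frame) property exploited in the paper.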

  16. DYNA3D example problem manual

    SciTech Connect

    Lovejoy, S.C.; Whirley, R.G.

    1990-10-10

    This manual describes in detail the solution of ten example problems using the explicit nonlinear finite element code DYNA3D. The sample problems include solid, shell, and beam element types, and a variety of linear and nonlinear material models. For each example, there is first an engineering description of the physical problem to be studied. Next, the analytical techniques incorporated in the model are discussed and key features of DYNA3D are highlighted. INGRID commands used to generate the mesh are listed, and sample plots from the DYNA3D analysis are given. Finally, there is a description of the TAURUS post-processing commands used to generate the plots of the solution. This set of example problems is useful in verifying the installation of DYNA3D on a new computer system. In addition, these documented analyses illustrate the application of DYNA3D to a variety of engineering problems, and thus this manual should be helpful to new analysts getting started with DYNA3D. 7 refs., 56 figs., 9 tabs.

  17. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care. PMID:25620087

  18. RAG-3D: a search tool for RNA 3D substructures.

    PubMed

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-10-30

To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D, a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool, designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  19. 3-D SAR image formation from sparse aperture data using 3-D target grids

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  20. Rapid 360 degree imaging and stitching of 3D objects using multiple precision 3D cameras

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank

    2008-02-01

In this paper, we present the system architecture of a 360 degree view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-coded structured lighting ensures the precise reconstruction of the depth of the object. A 3D imaging system architecture is presented. The architecture employs the displacement between the camera and the projector to triangulate the depth information. The 3D camera system has achieved high depth resolution down to 0.1 mm on a human-head-sized object and 360 degree imaging capability.

  1. CFL3D, FUN3d, and NSU3D Contributions to the Fifth Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Laflin, Kelly R.; Chaffin, Mark S.; Powell, Nicholas; Levy, David W.

    2013-01-01

    Results presented at the Fifth Drag Prediction Workshop using CFL3D, FUN3D, and NSU3D are described. These are calculations on the workshop-provided grids and drag-adapted grids. The NSU3D results have been updated to reflect an improvement to the skin friction calculation on skewed grids. FUN3D results generated after the workshop are included for custom participant-generated grids and a grid from a previous workshop. Uniform grid refinement at the design condition shows a tight grouping in calculated drag, where the variation in the pressure component of drag is larger than that in the skin friction component. At this design condition, a fine-grid drag value was predicted with a smaller drag-adjoint-adapted grid via tetrahedral adaption to a metric and mixed-element subdivision. The buffet study produced larger variation than the design case, which is attributed to large differences in the predicted side-of-body separation extent. Various modeling and discretization approaches had a strong impact on predicted side-of-body separation. This large wing-root separation bubble was not observed in wind tunnel tests, indicating that more work is necessary in modeling wing-root juncture flows to predict experiments.

  2. PLOT3D Export Tool for Tecplot

    NASA Technical Reports Server (NTRS)

    Alter, Stephen

    2010-01-01

    The PLOT3D export tool for Tecplot solves the problem of modified data being impossible to output for use by another computational science solver. The PLOT3D Exporter add-on enables the use of the most commonly available visualization tools to engineers for output of a standard format. The exportation of PLOT3D data from Tecplot has far reaching effects because it allows for grid and solution manipulation within a graphical user interface (GUI) that is easily customized with macro language-based and user-developed GUIs. The add-on also enables the use of Tecplot as an interpolation tool for solution conversion between different grids of different types. This one add-on enhances the functionality of Tecplot so significantly, it offers the ability to incorporate Tecplot into a general suite of tools for computational science applications as a 3D graphics engine for visualization of all data. Within the PLOT3D Export Add-on are several functions that enhance the operations and effectiveness of the add-on. Unlike Tecplot output functions, the PLOT3D Export Add-on enables the use of the zone selection dialog in Tecplot to choose which zones are to be written by offering three distinct options - output of active, inactive, or all zones (grid blocks). As the user modifies the zones to output with the zone selection dialog, the zones to be written are similarly updated. This enables the use of Tecplot to create multiple configurations of a geometry being analyzed. For example, if an aircraft is loaded with multiple deflections of flaps, by activating and deactivating different zones for a specific flap setting, new specific configurations of that aircraft can be easily generated by only writing out specific zones. Thus, if ten flap settings are loaded into Tecplot, the PLOT3D Export software can output ten different configurations, one for each flap setting.

  3. Statistical properties of polarization image and despeckling method by multiresolution block-matching 3D filter

    NASA Astrophysics Data System (ADS)

    Wen, D. H.; Jiang, Y. S.; Zhang, Y. Z.; Gao, Q.

    2014-03-01

    Theoretical and experimental investigations of the speckle statistics of a polarization imaging system, together with a speckle-removal method, are presented. A method to obtain two images encoded by polarization degree within a single measurement process is proposed, along with a theoretical model of the polarization imaging system based on the Müller matrix. According to modern charge-coupled device (CCD) imaging characteristics, speckles are divided into two kinds, namely small speckle and big speckle. Based on this model, a speckle reduction algorithm combining the dual-tree complex wavelet transform (DTCWT) and the block-matching 3D filter (BM3D) is proposed (DTBM3D). Original laser image data, transformed by logarithmic compression, are decomposed by the DTCWT into approximation and detail subbands. Bilateral filtering is applied to the approximation subbands, and a suitably tuned BM3D filter is applied to the detail subbands. The despeckling results show that the contrast improvement index and edge preservation index outperform those of traditional methods. These findings provide a useful reference for characterizing and removing speckle noise.
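The logarithmic-compression step in pipelines like this one turns multiplicative speckle into additive noise, which an ordinary denoiser can then attack. A minimal homomorphic sketch (a plain 3×3 mean filter stands in for the paper's DTCWT/BM3D stages; all data are synthetic):

```python
import numpy as np

def homomorphic_despeckle(img, k=3):
    """Log-compress (multiplicative speckle -> additive noise), apply a
    simple k x k mean filter as a stand-in denoiser, exponentiate back."""
    log_img = np.log1p(img)
    pad = k // 2
    padded = np.pad(log_img, pad, mode='edge')
    out = np.zeros_like(log_img)
    for dy in range(k):                      # accumulate shifted copies
        for dx in range(k):
            out += padded[dy:dy + log_img.shape[0], dx:dx + log_img.shape[1]]
    out /= k * k
    return np.expm1(out)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
# fully developed speckle modeled as multiplicative gamma noise (mean 1)
speckled = clean * rng.gamma(4.0, 0.25, clean.shape)
smoothed = homomorphic_despeckle(speckled)
```

Swapping the mean filter for wavelet-subband filtering (as in DTBM3D) preserves edges far better, but the log/exp bracketing is the same.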

  4. A microfluidic device for 2D to 3D and 3D to 3D cell navigation

    NASA Astrophysics Data System (ADS)

    Shamloo, Amir; Amirifar, Leyla

    2016-01-01

    Microfluidic devices have received wide attention and shown great potential in the field of tissue engineering and regenerative medicine. Investigating cell response to various stimulations is much more accurate and comprehensive with the aid of microfluidic devices. In this study, we introduced a microfluidic device by which the matrix density as a mechanical property and the concentration profile of a biochemical factor as a chemical property could be altered. Our microfluidic device has a cell tank and a cell culture chamber to mimic both 2D to 3D and 3D to 3D migration of three types of cells. Fluid shear stress is negligible on the cells and a stable concentration gradient can be obtained by diffusion. The device was designed by a numerical simulation so that the uniformity of the concentration gradients throughout the cell culture chamber was obtained. Adult neural cells were cultured within this device and they showed different branching and axonal navigation phenotypes within varying nerve growth factor (NGF) concentration profiles. Neural stem cells were also cultured within varying collagen matrix densities while exposed to NGF concentrations and they experienced 3D to 3D collective migration. By generating vascular endothelial growth factor concentration gradients, adult human dermal microvascular endothelial cells also migrated in a 2D to 3D manner and formed a stable lumen within a specific collagen matrix density. It was observed that a minimum absolute concentration and concentration gradient were required to stimulate migration of all types of the cells. This device has the advantage of changing multiple parameters simultaneously and is expected to have wide applicability in cell studies.

  5. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGESBeta

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  6. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  7. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  8. Automatic needle segmentation in 3D ultrasound images using 3D Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgeng

    2007-12-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in image-guided surgery and therapy. Uterine adenoma and uterine bleeding are the two most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is currently used to destroy tumor cells or stop bleeding. A 3D US guidance system has now been developed to avoid accidents or death of the patient caused by inaccurate localization of the electrode and the tumor position during treatment. In this paper, we describe two automated techniques, the 3D Hough Transform (3DHT) and the 3D Randomized Hough Transform (3DRHT), which are potentially fast, accurate, and robust, to provide needle segmentation in 3D US images for use in 3D US imaging guidance. Based on the representation (Φ, θ, ρ, α) of straight lines in 3D space, we used the 3DHT algorithm to segment needles successfully, assuming that the approximate needle position and orientation are known a priori. The 3DRHT algorithm was developed to detect needles quickly without any prior information about the needle. The needle segmentation techniques were evaluated using 3D US images acquired by scanning water phantoms. The experiments demonstrated the feasibility of the two 3D needle segmentation algorithms described in this paper.
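The randomized-sampling idea behind a 3DRHT-style detector can be sketched as follows: repeatedly pick two points from the volume, form the candidate line through them, and keep the line supported by the most points. This is an illustrative sketch on synthetic data (the point cloud, thresholds, and trial count are all hypothetical), not the authors' implementation:

```python
import numpy as np

def rht_line_3d(points, n_trials=500, tol=0.5, seed=None):
    """Randomized 3D line detection: sample point pairs, score candidate
    lines by inlier count, return (count, point_on_line, unit_direction)."""
    rng = np.random.default_rng(seed)
    best = (0, None, None)
    for _ in range(n_trials):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # perpendicular distance of every point to the candidate line
        diff = points - points[i]
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        count = int((dist < tol).sum())
        if count > best[0]:
            best = (count, points[i], d)
    return best

# synthetic "needle": noisy points along one direction, plus clutter
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 60)
dirn = np.array([1.0, 2.0, 0.5]) / np.linalg.norm([1.0, 2.0, 0.5])
needle = t[:, None] * dirn + rng.normal(0.0, 0.05, (60, 3))
clutter = rng.uniform(-5.0, 15.0, (40, 3))
count, p0, d_hat = rht_line_3d(np.vstack([needle, clutter]), seed=1)
```

The deterministic 3DHT instead discretizes the four line parameters (Φ, θ, ρ, α) into an accumulator and votes every point into it, which is why it benefits from an a priori estimate to keep the accumulator small.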

  9. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can still be used to reconstruct that partition at reduced fidelity.
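The error-containment principle — compress each partition independently so that corrupting one stream cannot affect the others — can be demonstrated with any off-the-shelf compressor. A minimal sketch using stdlib `zlib` as a stand-in for ICER-3D's wavelet/entropy coding (partition count and payload are arbitrary):

```python
import zlib

def compress_partitions(data: bytes, n_parts: int):
    """Split data into equal partitions and compress each one
    independently, so the streams share no state (error containment)."""
    size = -(-len(data) // n_parts)          # ceiling division
    parts = [data[i * size:(i + 1) * size] for i in range(n_parts)]
    return [zlib.compress(p) for p in parts]

def decompress_surviving(streams):
    """Decode whatever partitions survived transmission; a corrupted
    stream yields None without affecting its neighbors."""
    out = []
    for s in streams:
        try:
            out.append(zlib.decompress(s))
        except zlib.error:
            out.append(None)
    return out

payload = bytes(range(256)) * 16
streams = compress_partitions(payload, 4)
streams[2] = b"\x00corrupted"                # simulate loss in one partition
recovered = decompress_surviving(streams)
```

ICER-3D adds a second layer on top of this: within each partition the stream is progressive, so even a truncated (rather than corrupted) partition still decodes to a lower-fidelity version of its region.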

  10. Shim3d Helmholtz Solution Package

    Energy Science and Technology Software Center (ESTSC)

    2009-01-29

    This suite of codes solves the Helmholtz Equation for the steady-state propagation of single-frequency electromagnetic radiation in an arbitrary 2D or 3D dielectric medium. Materials can be either transparent or absorptive (including metals) and are described entirely by their shape and complex dielectric constant. Dielectric boundaries are assumed to always fall on grid boundaries and the material within a single grid cell is considered to be uniform. Input to the problem is in the form of a Dirichlet boundary condition on a single boundary, and may be either analytic (Gaussian) in shape, or a mode shape computed using a separate code (such as the included eigenmode solver vwave20), and written to a file. Solution is via the finite difference method using Jacobi iteration for 3D problems or direct matrix inversion for 2D problems. Note that 3D problems that include metals will require different iteration parameters than described in the above reference. For structures with curved boundaries not easily modeled on a rectangular grid, the auxiliary codes helmholtz11 (2D), helm3d (semivectorial), and helmv3d (full vectorial) are provided. For these codes the finite difference equations are specified on a topologically regular triangular grid and solved using Jacobi iteration or direct matrix inversion as before. An automatic grid generator is supplied.
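The finite-difference-plus-Jacobi approach can be sketched in 2D for a uniform transparent medium with a Gaussian Dirichlet condition on one edge. This is an illustrative toy (grid size, wavenumber, and iteration count chosen arbitrarily so the iteration converges), not the package's solver:

```python
import numpy as np

# (nabla^2 + k^2) E = 0 on a uniform grid: the 5-point stencil gives
# E[i,j] = (sum of 4 neighbors) / (4 - (h*k)^2), iterated Jacobi-style.
nx, ny, h, k = 40, 40, 0.05, 1.0
E = np.zeros((nx, ny))
x = np.linspace(-1.0, 1.0, ny)
E[0, :] = np.exp(-((x / 0.3) ** 2))      # Gaussian Dirichlet boundary

denom = 4.0 - (h * k) ** 2
for _ in range(3000):
    En = E.copy()                        # Jacobi: update from old values only
    En[1:-1, 1:-1] = (E[:-2, 1:-1] + E[2:, 1:-1]
                      + E[1:-1, :-2] + E[1:-1, 2:]) / denom
    E = En
```

For large k (many wavelengths per domain) plain Jacobi on the indefinite Helmholtz operator can diverge, which is consistent with the note above that metallic 3D problems need different iteration parameters.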

  11. 3D Spray Droplet Distributions in Sneezes

    NASA Astrophysics Data System (ADS)

    Techet, Alexandra; Scharfman, Barry; Bourouiba, Lydia

    2015-11-01

    3D spray droplet clouds generated during human sneezing are investigated using the Synthetic Aperture Feature Extraction (SAFE) method, which relies on light field imaging (LFI) and synthetic aperture (SA) refocusing computational photographic techniques. An array of nine high-speed cameras is used to image sneeze droplets and track them in 3D space and time (3D + T). An additional high-speed camera is utilized to track the motion of the head during sneezing. In the SAFE method, the raw images recorded by each camera in the array are preprocessed and binarized, simplifying post-processing after image refocusing and enabling the extraction of feature sizes and positions in 3D + T. These binary images are refocused using either additive or multiplicative methods, combined with thresholding. Sneeze droplet centroids, radii, distributions, and trajectories are determined and compared with existing data. The reconstructed 3D droplet centroids and radii enable a more complete understanding of the physical extent and fluid dynamics of sneeze ejecta. These measurements are important for understanding the infectious disease transmission potential of sneezes in various indoor environments.
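The additive refocusing step can be sketched directly: shift each camera's binarized image by the parallax corresponding to a chosen depth plane, then average, so features on that plane reinforce while off-plane features smear out. A minimal synthetic example (camera count, image size, and per-camera parallax are hypothetical):

```python
import numpy as np

def refocus_additive(images, shifts):
    """Additive SA refocus: apply each camera's (dy, dx) parallax shift
    for the chosen depth plane, then average the shifted images."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, shifts):
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(images)

# synthetic droplet whose parallax is 2 px per camera index
n_cams, h, w = 9, 32, 32
images = []
for c in range(n_cams):
    img = np.zeros((h, w))
    img[16, 5 + 2 * c] = 1.0             # droplet shifts 2 px per camera
    images.append(img)
shifts = [(0, -2 * c) for c in range(n_cams)]   # shifts for the true depth
refocused = refocus_additive(images, shifts)
```

Thresholding the refocused stack near its maximum then isolates the in-focus droplet; sweeping the assumed depth plane produces the full 3D + T reconstruction.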

  12. T-HEMP3D user manual

    SciTech Connect

    Turner, D.

    1983-08-01

    The T-HEMP3D (Transportable HEMP3D) computer program is a derivative of the STEALTH three-dimensional thermodynamics code developed by Science Applications, Inc., under the direction of Ron Hofmann. STEALTH, in turn, is based entirely on the original HEMP3D code written at Lawrence Livermore National Laboratory. The primary advantage STEALTH has over its predecessors is that it was designed using modern structured design techniques, with rigorous programming standards enforced. This yields two benefits. First, the code is easily changeable; this is a necessity for a physics code used for research. The second benefit is that the code is easily transportable between different types of computers. The STEALTH program was transferred to LLNL under a cooperative development agreement. Changes were made primarily in three areas: material specification, coordinate generation, and the addition of sliding surface boundary conditions. The code was renamed T-HEMP3D to avoid confusion with other versions of STEALTH. This document summarizes the input to T-HEMP3D, as used at LLNL. It does not describe the physics simulated by the program, nor the numerical techniques employed. Furthermore, it does not describe the separate job steps of coordinate generation and post-processing, including graphical display of results. (WHK)

  13. Magnetic Properties of 3D Printed Toroids

    NASA Astrophysics Data System (ADS)

    Bollig, Lindsey; Otto, Austin; Hilpisch, Peter; Mowry, Greg; Nelson-Cheeseman, Brittany; Renewable Energy and Alternatives Lab (REAL) Team

    Transformers are ubiquitous in electronics today. Although toroidal geometries perform most efficiently, transformers are traditionally made with rectangular cross-sections due to the lower manufacturing costs. Additive manufacturing techniques (3D printing) can easily achieve toroidal geometries by building up a part through a series of 2D layers. To get strong magnetic properties in a 3D printed transformer, a composite filament is used containing Fe dispersed in a polymer matrix. How the resulting 3D printed toroid responds to a magnetic field depends on two structural factors of the printed 2D layers: fill factor (planar density) and fill pattern. In this work, we investigate how the fill factor and fill pattern affect the magnetic properties of 3D printed toroids. The magnetic properties of the printed toroids are measured by a custom circuit that produces a hysteresis loop for each toroid. Toroids with various fill factors and fill patterns are compared to determine how these two factors can affect the magnetic field the toroid can produce. These 3D printed toroids can be used for numerous applications in order to increase the efficiency of transformers by making it possible for manufacturers to make a toroidal geometry.

  14. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains, not only for novice radiologists, a difficult task. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly due to missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three dimensional space, using a given projection matrix. To countervail the error connected to the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework, which computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
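The backprojection step — turning the 2D tip location into a ray in 3D given a projection matrix — can be sketched for a finite camera as follows. The camera parameters are hypothetical, and the paper's respiratory-motion compensation and statistical estimation stages are not shown:

```python
import numpy as np

def backproject_ray(P, u, v):
    """For a finite camera P = [M | p4], pixel (u, v) backprojects to the
    ray C + t*d, with centre C = -M^{-1} p4 and direction d = M^{-1} (u, v, 1)."""
    M, p4 = P[:, :3], P[:, 3]
    C = -np.linalg.solve(M, p4)                      # camera centre
    d = np.linalg.solve(M, np.array([u, v, 1.0]))    # ray direction
    return C, d / np.linalg.norm(d)

# hypothetical calibrated view: camera at the origin, looking down +z
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
C, d = backproject_ray(P, 320.0, 240.0)   # principal point -> optical axis
```

Intersecting this ray with the segmented 3D vessel model then yields the candidate tip locations among which the statistical framework selects the most reliable one.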

  15. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  16. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using a normalized correlation method. In conjunction with the matched features, we get disparity images. When the camera moves, the corresponding feature points, obtained from each lens of the 3D camera, are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via Essential matrices, which are computed from the Fundamental matrix calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation matrix by d-motion. This is required because the camera motion obtained from the Essential matrix is determined only up to scale. Finally, we optimize camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and surveillance systems that need not only depth information but also camera motion parameters in real-time.
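The Fundamental-to-Essential step is a fixed formula, E = K2^T F K1, followed by projecting the singular values to the (s, s, 0) pattern every valid essential matrix must have. A self-contained sketch on synthetic ground truth (the intrinsics, rotation, and translation below are made up for the check):

```python
import numpy as np

def essential_from_fundamental(F, K1, K2):
    """E = K2^T F K1, with singular values projected to (s, s, 0)
    as required of a valid essential matrix."""
    E = K2.T @ F @ K1
    U, s, Vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt

def skew(t):
    """Cross-product matrix so that skew(t) @ x == cross(t, x)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# synthetic ground truth: small rotation about y, translation along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
t = np.array([1.0, 0.0, 0.0])
E_true = skew(t) @ R
F = np.linalg.inv(K).T @ E_true @ np.linalg.inv(K)   # induced fundamental
E = essential_from_fundamental(F, K, K)
```

Note that E (and hence the translation decomposed from it) is recovered only up to scale and sign, which is exactly why the method above needs the disparity-based d-motion constraint to fix the scale factor.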

  17. Full-color holographic 3D printer

    NASA Astrophysics Data System (ADS)

    Takano, Masami; Shigeta, Hiroaki; Nishihara, Takashi; Yamaguchi, Masahiro; Takahashi, Susumu; Ohyama, Nagaaki; Kobayashi, Akihiko; Iwata, Fujio

    2003-05-01

    A holographic 3D printer is a system that produces a direct hologram with full-parallax information using the 3-dimensional data of a subject from a computer. In this paper, we present a proposal for the reproduction of full-color images with the holographic 3D printer. In order to realize the 3-dimensional color image, we selected the 3 laser wavelengths of red (λ=633 nm), green (λ=533 nm), and blue (λ=442 nm), and we built a one-step optical system using a projection system and a liquid crystal display. The 3-dimensional color image is obtained by synthesizing, in a 2D array, the multiple exposures with these 3 wavelengths made on each 250 mm elementary hologram, and moving the recording medium on an x-y stage. For natural color reproduction in the holographic 3D printer, we adopt a digital processing technique based on color management technology. The matching between the input and output colors is performed by investigating, first, the relation between the gray-level transmittance of the LCD and the diffraction efficiency of the hologram and, second, by measuring the color displayed by the hologram to establish a correlation. In our first experimental results, a non-linear functional relation for single and multiple exposures of the three components was found. These results are the first step in the realization of a natural color 3D image produced by the holographic color 3D printer.

  18. Extra dimensions: 3D in PDF documentation

    SciTech Connect

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  19. Extra dimensions: 3D in PDF documentation

    DOE PAGESBeta

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  20. The importance of 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Low, Daniel

    2015-01-01

    Radiation therapy has been getting progressively more complex for the past 20 years. Early radiation therapy techniques needed only basic dosimetry equipment: motorized water phantoms, ionization chambers, and basic radiographic film techniques. As intensity modulated radiation therapy and image guided therapy came into widespread practice, medical physicists were challenged with developing effective and efficient dose measurement techniques. The complex 3-dimensional (3D) nature of the dose distributions that were being delivered demanded the development of more quantitative and more thorough methods for dose measurement. The quality assurance vendors developed a wide array of multidetector arrays that have been enormously useful for measuring and characterizing dose distributions, and these have been made especially useful with the advent of 3D dose calculation systems based on the array measurements, as well as measurements made using film and portal imagers. Other vendors have been providing 3D calculations based on data from the linear accelerator or the record and verify system, providing thorough evaluation of the dose but lacking quality assurance (QA) of the dose delivery process, including machine calibration. The current state of 3D dosimetry is one of flux. The vendors and professional associations are trying to determine the optimal balance between thorough QA, labor efficiency, and quantitation. This balance will take some time to reach, but a necessary component will be the 3D measurement and independent calculation of delivered radiation therapy dose distributions.

  1. Visual inertia of rotating 3-D objects.

    PubMed

    Jiang, Y; Pantle, A J; Mark, L S

    1998-02-01

    Five experiments were designed to determine whether a rotating, transparent 3-D cloud of dots (simulated sphere) could influence the perceived direction of rotation of a subsequent sphere. Experiment 1 established conditions under which the direction of rotation of a virtual sphere was perceived unambiguously. When a near-far luminance difference and perspective depth cues were present, observers consistently saw the sphere rotate in the intended direction. In Experiment 2, a near-far luminance difference was used to create an unambiguous rotation sequence that was followed by a directionally ambiguous rotation sequence that lacked both the near-far luminance cue and the perspective cue. Observers consistently saw the second sequence as rotating in the same direction as the first, indicating the presence of 3-D visual inertia. Experiment 3 showed that 3-D visual inertia was sufficiently powerful to bias the perceived direction of a rotation sequence made unambiguous by a near-far luminance cue. Experiment 5 showed that 3-D visual inertia could be obtained using an occlusion depth cue to create an unambiguous inertia-inducing sequence. Finally, Experiments 2, 4, and 5 all revealed a fast-decay phase of inertia that lasted for approximately 800 msec, followed by an asymptotic phase that lasted for periods as long as 1,600 msec. The implications of these findings are examined with respect to motion mechanisms of 3-D visual inertia. PMID:9529911

  2. Integral 3D display using multiple LCDs

    NASA Astrophysics Data System (ADS)

    Okaichi, Naoto; Miura, Masato; Arai, Jun; Mishina, Tomoyuki

    2015-03-01

    The quality of the integral 3D images created by a 3D imaging system was improved by combining multiple LCDs to utilize a greater number of pixels than is possible with one LCD. A prototype of the display device was constructed using four HD LCDs. An integral photography (IP) image displayed by the prototype is four times larger than that reconstructed by a single display. The pixel pitch of the HD display used is 55.5 μm, and the number of elemental lenses is 212 horizontally and 119 vertically. The 3D image pixel count is 25,228, and the viewing angle is 28°. Since this method is extensible, it is possible to display an integral 3D image of higher quality by increasing the number of LCDs. Using this integral 3D display structure makes it possible to make the whole device thinner than a projector-based display system. It is therefore expected to be applied to home television in the future.
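    The lens-count arithmetic above can be checked directly: in integral photography each elemental lens contributes one pixel of the reconstructed 3D image, so the reported 3D pixel count follows from the lens grid alone (a quick sketch, not taken from the paper):

```python
# One 3D image pixel per elemental lens: the reported 212 x 119 lens grid
# accounts exactly for the stated 3D pixel count of 25,228.
lenses_h, lenses_v = 212, 119
pixel_count_3d = lenses_h * lenses_v
print(pixel_count_3d)  # 25228
```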

  3. 3D bioprinting for engineering complex tissues.

    PubMed

    Mandrycky, Christian; Wang, Zongjie; Kim, Keekyoung; Kim, Deok-Ho

    2016-01-01

    Bioprinting is a 3D fabrication technology used to precisely dispense cell-laden biomaterials for the construction of complex 3D functional living tissues or artificial organs. While still in its early stages, bioprinting strategies have demonstrated their potential use in regenerative medicine to generate a variety of transplantable tissues, including skin, cartilage, and bone. However, current bioprinting approaches still have technical challenges in terms of high-resolution cell deposition, controlled cell distributions, vascularization, and innervation within complex 3D tissues. While no one-size-fits-all approach to bioprinting has emerged, it remains an on-demand, versatile fabrication technique that may address the growing organ shortage as well as provide a high-throughput method for cell patterning at the micrometer scale for broad biomedical engineering applications. In this review, we introduce the basic principles, materials, integration strategies and applications of bioprinting. We also discuss the recent developments, current challenges and future prospects of 3D bioprinting for engineering complex tissues. Combined with recent advances in human pluripotent stem cell technologies, 3D-bioprinted tissue models could serve as an enabling platform for high-throughput predictive drug screening and more effective regenerative therapies. PMID:26724184

  4. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system measuring 35 × 35 × 105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, a biological specimen's image can be captured in a single shot. With the light field raw data and processing program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm is needed to precisely distinguish depth position. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel utilization efficiency and reduce the crosstalk between neighboring microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescent particles separated by a cover glass over a 600 μm range, and show its focal stacks and 3-D positions.
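    The digital refocusing step described above is commonly implemented as shift-and-add over sub-aperture views; the sketch below is our own minimal illustration (the `refocus` function and the synthetic views are assumptions, not the authors' code):

```python
import numpy as np

def refocus(views, slope):
    """Shift-and-add refocus: views maps (u, v) offsets -> equal-shape 2D arrays.

    Each view is shifted in proportion to its offset from the central view;
    the slope parameter selects which depth plane comes into focus.
    """
    acc = None
    for (u, v), img in views.items():
        shifted = np.roll(np.roll(img, int(round(slope * u)), axis=0),
                          int(round(slope * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# A flat (constant) scene refocuses to itself at every depth.
views = {(u, v): np.full((4, 4), 7.0) for u in (-1, 0, 1) for v in (-1, 0, 1)}
focal_stack = [refocus(views, s) for s in (0.0, 0.5, 1.0)]
```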

  5. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R & D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on several 3D objects using an original constructive calculation method. Efficient algorithms were developed for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method, along with studies of the peculiarities of shadow and image formation for typical elements of extended objects. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL, and PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing are presented and discussed. The devices are in pilot operation at atomic and railway companies.

  6. BEAMS3D Neutral Beam Injection Model

    NASA Astrophysics Data System (ADS)

    McMillan, Matthew; Lazerson, Samuel A.

    2014-09-01

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous slowing down, and pitch angle scattering are modeled with the ADAS atomic physics database. Elementary benchmark calculations are presented to verify the collisionless particle orbits, NBI model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields. Notice: this manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the US Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
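    For a flavor of the particle-pushing machinery such codes rely on, here is a minimal Boris push for a charged particle in electromagnetic fields (our own sketch; BEAMS3D's actual guiding-center equations are more involved):

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """Advance position and velocity one step with the Boris rotation scheme."""
    v_minus = v + 0.5 * q_over_m * E * dt          # first half electric kick
    t = 0.5 * q_over_m * B * dt                    # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # pure rotation: preserves |v|
    v_new = v_plus + 0.5 * q_over_m * E * dt       # second half electric kick
    return x + v_new * dt, v_new

# Gyration in a uniform magnetic field: speed is conserved to machine precision.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, np.zeros(3), B, q_over_m=1.0, dt=0.01)
```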

  7. Mobile 3d Mapping with a Low-Cost Uav System

    NASA Astrophysics Data System (ADS)

    Neitzel, F.; Klonowski, J.

    2011-09-01

    In this contribution it is shown how a UAV system can be built at low cost. The components of the system, the equipment, and the control software are presented. Furthermore, an implemented program for photogrammetric flight planning and its execution are described. The main focus of this contribution is on the generation of 3D point clouds from digital imagery. For this purpose, web services and free software solutions are presented that automatically generate 3D point clouds from arbitrary image configurations. Possibilities for georeferencing are described, and the achieved accuracy is determined. The presented workflow is finally used for the acquisition of 3D geodata. Using the example of a landfill survey, it is shown that marketable products can be derived using a low-cost UAV.
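    One common way to georeference such point clouds is to estimate a similarity transform (scale, rotation, translation) from surveyed ground control points; the sketch below uses the SVD-based Umeyama method and is our own illustration, not the workflow's actual software:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale, rotation, translation mapping src onto dst (Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / ((A ** 2).sum() / len(src))
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Recover a known transform from synthetic control points.
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([10.0, 20.0, 30.0])
scale, R, t = similarity_transform(src, dst)  # scale ~ 2, t ~ (10, 20, 30)
```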

  8. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971
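    For intuition, single-look multiplicative speckle can be simulated and quantified with the equivalent number of looks (ENL); this toy model is our own illustration and is not drawn from the paper:

```python
import numpy as np

# Single-look intensity speckle is exponentially distributed with unit mean,
# applied multiplicatively to the noise-free intensity of a homogeneous patch.
rng = np.random.default_rng(42)
clean = np.full((256, 256), 10.0)
noisy = clean * rng.exponential(1.0, size=clean.shape)

# ENL = mean^2 / variance over a homogeneous region; ~1 before despeckling.
# A good despeckling filter raises the ENL substantially.
enl = noisy.mean() ** 2 / noisy.var()
```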

  9. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm

    PubMed Central

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  10. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor. PMID:26386332
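    The physical intuition can be sketched with a parallel-plate approximation (our own toy numbers, not the sensor's actual geometry): capacitance between an electrode pair scales with the effective permittivity of the hydrogel-cell mixture, so changes in cell number or state shift the measured capacitance.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r, area_m2, gap_m):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Hypothetical relative permittivities before and after cell proliferation.
c_medium = capacitance(eps_r=80.0, area_m2=1e-6, gap_m=1e-4)
c_with_cells = capacitance(eps_r=76.0, area_m2=1e-6, gap_m=1e-4)
relative_change = (c_with_cells - c_medium) / c_medium  # -0.05, i.e. a 5% drop
```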

  11. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications, and remote sensing applications. This experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning, with laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, and uses 3ds Max software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the 3D scene has good fidelity and that its accuracy meets the needs of 3D scene construction.

  12. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phone's display, as if the display were a window into this space. Besides lines, our prototype application also supports 3D geometry creation and geometry transformation operations, and it shows the location of the other user's phone.

  13. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  14. 3D Printed Multimaterial Microfluidic Valve.

    PubMed

    Keating, Steven J; Gariboldi, Maria Isabella; Patrick, William G; Sharma, Sunanda; Kong, David S; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  15. Angular description for 3D scattering centers

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Raynal, Ann Marie; Ling, Hao; Moore, John; Velten, Vincent J.

    2006-05-01

    The electromagnetic scattered field from an electrically large target can often be well modeled as if it is emanating from a discrete set of scattering centers (see Fig. 1). In the scattering center extraction tool we developed previously, based on the shooting-and-bouncing-ray technique, no correspondence is maintained among the 3D scattering centers extracted at adjacent angles. In this paper we present a multi-dimensional clustering algorithm to track the angular and spatial behaviors of 3D scattering centers and group them into features. The extracted features for the Slicy and backhoe targets are presented. We also describe two metrics for measuring the angular persistence and spatial mobility of the 3D scattering centers that make up these features in order to gather insights into target physics and feature stability. We find that the features that are most persistent are also the most mobile, and discuss the implications for optimal SAR imaging.

  16. Ames Lab 101: 3D Metals Printer

    SciTech Connect

    Ott, Ryan

    2014-02-13

    To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.

  17. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and virtual reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one-thousandth to one-millionth of that at the Earth's surface (10(exp -3) - 10(exp -6) G), so one must be able to generate simulations that replicate those microgravity effects on simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.
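    A simple workaround of the kind the paper alludes to is to scale the gravitational acceleration before integrating motion, rather than modifying the simulation engine; the numbers below are our own sketch using the cited 10⁻³ to 10⁻⁶ G range:

```python
G_EARTH = 9.81  # m/s^2

# Displacement of an object released from rest after t seconds: s = g*t^2/2.
def drop_distance(g, t):
    return 0.5 * g * t ** 2

# At 1e-3 G an object falls ~0.49 m in 10 s; at 1e-6 G, ~0.5 mm.
for factor in (1e-3, 1e-6):
    g = G_EARTH * factor
    print(f"{factor:g} G: {drop_distance(g, 10.0):.4g} m in 10 s")
```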

  18. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high-dynamic-range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation can achieve high-dynamic-range 3D imaging effectively. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for performing high-quality 3D imaging of both highly and lowly reflective surfaces. PMID:27607639
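    The fringe-phase recovery that underlies phase-encoded depth can be illustrated with a generic four-step phase-shifting scheme (our sketch; the paper's exact formulation may differ):

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Per-pixel wrapped phase from four fringe images shifted by pi/2 each."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: one pixel with background 1.0, modulation 0.5, phase 0.7 rad.
phi, a, b = 0.7, 1.0, 0.5
imgs = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(wrapped_phase(*imgs))  # ~0.7
```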

  19. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high-quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps involved in the calibration of the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  20. Spectroradiometric characterization of autostereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    Spectroradiometric measurements have been made for the experimental characterization of the RGB channels of autostereoscopic 3D displays, giving results for different measurement angles with respect to the normal direction of the plane of the display. In the study, two different models of autostereoscopic 3D displays of different sizes and resolutions were used, making measurements with a spectroradiometer (model PR-670 SpectraScan of PhotoResearch). From the measurements made, goniometric results were recorded for luminance contrast, and the fundamental hypotheses for the characterization of the displays have been evaluated: independence of the RGB channels and their constancy. The results show that the display with the lower angular variability in the contrast-ratio value and constancy of the chromaticity coordinates nevertheless presented the greatest additivity deviations with measurement angle. For both displays, angular variability in the evaluated parameters was consistently lower in the 2D mode than in the 3D mode.
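    Channel additivity, one of the hypotheses evaluated above, can be tested numerically: if the channels are independent, the white-point tristimulus values equal the sum of the individual channel measurements. The values below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical XYZ tristimulus measurements of the R, G, B channels and white.
XYZ_r = np.array([41.2, 21.3, 1.9])
XYZ_g = np.array([35.8, 71.5, 11.9])
XYZ_b = np.array([18.0, 7.2, 95.0])
XYZ_white = np.array([95.0, 100.0, 108.8])

# Perfect additivity gives zero error; crosstalk between channels shows up
# as a nonzero deviation that typically grows with measurement angle.
additivity_error = np.abs(XYZ_r + XYZ_g + XYZ_b - XYZ_white)
```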

  1. 3D Printed Multimaterial Microfluidic Valve

    PubMed Central

    Patrick, William G.; Sharma, Sunanda; Kong, David S.; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  2. Decoder for 3-D color codes

    NASA Astrophysics Data System (ADS)

    Hsu, Kung-Chuan; Brun, Todd

    Transversal circuits are important components of fault-tolerant quantum computation. Several classes of quantum error-correcting codes are known to have transversal implementations of any logical Clifford operation. However, to achieve universal quantum computation, it would be helpful to have high-performance error-correcting codes that have a transversal implementation of some logical non-Clifford operation. The 3-D color codes are a class of topological codes that permit transversal implementation of the logical π/8 gate. The decoding problem of a 3-D color code can be understood as a graph-matching problem on a three-dimensional lattice. Whether this class of codes will be useful in terms of performance is still an open question. We investigate the decoding problem of 3-D color codes and analyze the performance of some possible decoders.

  3. Particle Acceleration in 3D Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Dahlin, J.; Drake, J. F.; Swisdak, M.

    2015-12-01

    Magnetic reconnection is an important driver of energetic particles in phenomena such as magnetospheric storms and solar flares. Using kinetic particle-in-cell (PIC) simulations, we show that the stochastic magnetic field structure which develops during 3D reconnection plays a vital role in particle acceleration and transport. In a 2D system, electrons are trapped in magnetic islands which limits their energy gain. In a 3D system, however, the stochastic magnetic field enables the energetic electrons to access volume-filling acceleration regions and therefore gain energy much more efficiently than in the 2D system. We also examine the relative roles of two important acceleration drivers: parallel electric fields and a Fermi mechanism associated with reflection of charged particles from contracting field lines. We find that parallel electric fields are most important for accelerating low energy particles, whereas Fermi reflection dominates energetic particle production. We also find that proton energization is reduced in the 3D system.

  4. Ames Lab 101: 3D Metals Printer

    ScienceCinema

    Ott, Ryan

    2014-06-04

    To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.

  5. 3-D Finite Element Heat Transfer

    Energy Science and Technology Software Center (ESTSC)

    1992-02-01

    TOPAZ3D is a three-dimensional implicit finite element computer code for heat transfer analysis. TOPAZ3D can be used to solve for the steady-state or transient temperature field on three-dimensional geometries. Material properties may be temperature-dependent and either isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions can be specified including temperature, flux, convection, and radiation. By implementing the user subroutine feature, users can model chemical reaction kinetics and allow for any type of functional representation of boundary conditions and internal heat generation. TOPAZ3D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermal contact resistance across an interface, bulk fluids, phase change, and energy balances.
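    The implicit time stepping that makes such codes unconditionally stable can be sketched in one dimension (our own illustration with backward Euler finite differences, far simpler than TOPAZ3D's finite elements):

```python
import numpy as np

def implicit_heat_step(T, alpha, dx, dt):
    """One backward Euler step of the 1D heat equation: (I - dt*alpha*L) T_new = T_old."""
    n = len(T)
    r = alpha * dt / dx ** 2
    A = np.eye(n) * (1 + 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
    # Dirichlet boundaries: end temperatures are held at their current values.
    A[0, :] = 0.0; A[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    return np.linalg.solve(A, T)

# A hot spot in a cold rod diffuses outward; the implicit scheme stays
# bounded and non-negative even for large time steps.
T = np.zeros(11); T[5] = 100.0
for _ in range(50):
    T = implicit_heat_step(T, alpha=1.0, dx=0.1, dt=0.01)
```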

  6. Impedance mammograph 3D phantom studies.

    PubMed

    Wtorek, J; Stelter, J; Nowakowski, A

    1999-04-20

    The results of a 3D phantom study obtained using the Technical University of Gdansk Electroimpedance Mammograph (TUGEM) are presented. The TUGEM system is briefly described. The hardware contains the measurement head and DSP-based identification modules controlled by a PC. A specially developed reconstruction algorithm, the Regulated Correction Frequency Algebraic Reconstruction Technique (RCFART), is used to obtain 3D images. To visualize results, the Advance Visualization System (AVS) is used, which allows powerful image processing on a fast workstation or on a high-performance computer. Results for the three types of 3D conductivity perturbations used in the study (aluminum, Plexiglas, and cucumber) are shown. Perturbations with relative volumes of less than 2% of the measurement chamber are easily detected. PMID:10372188
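    The core projection step of algebraic reconstruction techniques, the family RCFART belongs to, is the Kaczmarz update; this generic sketch is ours and omits RCFART's regularization and frequency correction:

```python
import numpy as np

def art(A, b, iters=100, relax=1.0):
    """Kaczmarz/ART: cyclically project the estimate onto each row's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for i in range(A.shape[0]):
            ai = A[i]
            x = x + relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# For a consistent system, the sweeps converge to the exact solution.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
b = np.array([2.0, 5.0])
x = art(A, b)
print(x)  # converges to [2, 3]
```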

  7. 3D EIT image reconstruction with GREIT.

    PubMed

    Grychtol, Bartłomiej; Müller, Beat; Adler, Andy

    2016-06-01

    Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184

  8. Methods for comparing 3D surface attributes

    NASA Astrophysics Data System (ADS)

    Pang, Alex; Freeman, Adam

    1996-03-01

    A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometries are assumed to be identical; only the surface attributes (color, texture, etc.) vary. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.
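    The simplest of the comparison methods mentioned, mapping difference information to color, can be sketched in a few lines. The per-vertex luminance values and the blue-to-red colormap below are illustrative assumptions, not data from the paper.

```python
# Hedged sketch: map per-vertex attribute differences to color, one of the
# simple comparison methods described above. The vertex data and the
# colormap are illustrative assumptions.

def diff_to_color(a, b, max_diff):
    """Map |a - b| to an RGB triple on a blue (no difference) to
    red (maximum difference) ramp."""
    t = min(abs(a - b) / max_diff, 1.0)   # normalized difference in [0, 1]
    return (t, 0.0, 1.0 - t)              # linear blue-to-red ramp

# Two renderings' per-vertex luminances (e.g. from different form-factor methods)
render_a = [0.20, 0.55, 0.90, 0.40]
render_b = [0.22, 0.70, 0.85, 0.40]

max_diff = max(abs(x - y) for x, y in zip(render_a, render_b))
colors = [diff_to_color(x, y, max_diff) for x, y in zip(render_a, render_b)]
# Vertex 1 (largest difference) maps to pure red; vertex 3 (no difference) to pure blue.
```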

  9. Local Diagnosis of Reconnection in 3D

    NASA Astrophysics Data System (ADS)

    Scudder, J. D.; Karimabadi, H.; Daughton, W. S.; Roytershteyn, V.

    2014-12-01

    We demonstrate (I,II) an approach to find reconnection sites in 3D where there is no flux function for guidance, and where local observational signatures for the ``violation of frozen flux'' are underdeveloped, if not non-existent. We use 2D and 3D PIC simulations of asymmetric guide field reconnection to test our observational hierarchy of single spacecraft kinetic diagnostics - all possible with present state-of-the-art instrumentation. The proliferation of turbulent, electron inertial scale layers in the realistic 3D case demonstrates that electron demagnetization, while necessary, is not sufficient to identify reconnection sites. An excellent local, observable, single spacecraft proxy is demonstrated for the size of the theoretical frozen flux violation. Since even frozen flux violations need not imply reconnection is at hand, a new calibrated dimensionless method is used to determine the importance of such violations. This measure is available in 2D and 3D to help differentiate reconnection layers from weaker frozen flux violating layers. We discuss the possibility that this technique can be implemented on MMS. A technique to highlight flow geometries conducive to reconnection in 3D simulations is also suggested, which may also be implementable with the MMS flotilla. We use local analysis with multiple necessary, but theoretically independent, electron kinetic conditions to help reduce the probability of misidentification of any given layer as a reconnection site. Since these local conditions are all necessary for the site, but none is known to be sufficient, the multiple tests help to greatly reduce false positive identifications. The selectivity of the results of this approach using PIC simulations of 3D asymmetric guide field reconnection will be shown using varying numbers of simultaneous conditions. Scudder, J.D., H. Karimabadi, W. Daughton and V. Roytershteyn I, II, submitted Phys. Plasmas, 2014

  10. PlumeSat: A Micro-Satellite Based Plume Imagery Collection Experiment

    SciTech Connect

    Ledebuhr, A.G.; Ng, L.C.

    2002-06-30

    This paper describes a technical approach to cost-effectively collect plume imagery of boosting targets using a novel micro-satellite based platform operating in low earth orbit (LEO). The plume collection micro-satellite, or PlumeSat for short, will be capable of carrying an array of multi-spectral (UV through LWIR) passive and active (imaging LADAR) sensors and maneuvering with a lateral divert propulsion system to different observation altitudes (100 to 300 km) and different closing geometries to achieve a range of aspect angles (15 to 60 degrees) in order to simulate a variety of boost phase intercept missions. The PlumeSat will be a cost-effective platform to collect boost phase plume imagery from ranges of 1 to 10 km, resulting in 0.1 to 1 meter resolution imagery of a variety of potential target missiles, with the goal of demonstrating reliable plume-to-hardbody handover algorithms for future boost phase intercept missions. Once deployed on orbit, the PlumeSat would perform a series of phenomenology collection experiments until it expends its on-board propellants. The baseline PlumeSat concept is sized to provide 5 to 7 separate fly-by data collects of boosting targets. The total number of data collects will depend on the orbital basing altitude and the accuracy in delivering the boosting target vehicle to the nominal PlumeSat fly-by volume.

  11. 3D printed diffractive terahertz lenses.

    PubMed

    Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław

    2016-04-15

    A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, phase binary lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335

  12. The Galicia 3D experiment: an Introduction.

    NASA Astrophysics Data System (ADS)

    Reston, Timothy; Martinez Loriente, Sara; Holroyd, Luke; Merry, Tobias; Sawyer, Dale; Morgan, Julia; Jordan, Brian; Tesi Sanjurjo, Mari; Alexanian, Ara; Shillington, Donna; Gibson, James; Minshull, Tim; Karplus, Marianne; Bayracki, Gaye; Davy, Richard; Klaeschen, Dirk; Papenberg, Cord; Ranero, Cesar; Perez-Gussinye, Marta; Martinez, Miguel

    2014-05-01

    In June and July 2013, scientists from 8 institutions took part in the Galicia 3D seismic experiment, the first ever crustal-scale academic 3D MCS survey over a rifted margin. The aim was to determine the 3D structure of a critical portion of the west Galicia rifted margin. At this margin, well-defined tilted fault blocks, bounded by west-dipping faults and capped by synrift sediments, are underlain by a bright reflection, undulating on time sections, termed the S reflector and thought to represent a major detachment fault of some kind. Moving west, the crust thins to zero thickness and mantle is unroofed, as evidenced by the "Peridotite Ridge" first reported at this margin, but since observed at many other magma-poor margins. By imaging such a margin in detail, the experiment aimed to resolve the processes controlling crustal thinning and mantle unroofing at a type-example magma-poor margin. The experiment set out to collect several key datasets: a 3D seismic reflection volume measuring ~20x64 km and extending down to ~14 s TWT, a 3D ocean bottom seismometer dataset suitable for full wavefield inversion (the recording of the complete 3D seismic shots by 70 ocean bottom instruments), the "mirror imaging" of the crust using the same grid of OBS, a single 2D combined reflection/refraction profile extending to the west to determine the transition from unroofed mantle to true oceanic crust, and the seismic imaging of the water column, calibrated by regular deployment of XBTs to measure the temperature structure of the water column. We collected 1280 km² of seismic reflection data, consisting of 136533 shots recorded on 1920 channels, producing 260 million seismic traces, each ~14 s long. This adds up to ~8 terabytes of data, representing, we believe, the largest ever academic 3D MCS survey in terms of both the area covered and the volume of data. The OBS deployment was the largest ever within an academic 3D survey.

  13. Vector quantization of 3-D point clouds

    NASA Astrophysics Data System (ADS)

    Sim, Jae-Young; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    A geometry compression algorithm for 3-D QSplat data using vector quantization (VQ) is proposed in this work. The positions of child spheres are transformed to the local coordinate system, which is determined by the parent-child relationship. The coordinate transform makes child positions more compactly distributed in 3-D space, facilitating effective quantization. Moreover, we develop a constrained encoding method for sphere radii, which guarantees hole-free surface rendering at the decoder side. Simulation results show that the proposed algorithm provides a faithful rendering quality even at low bitrates.
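    The local-coordinate idea above can be sketched in a few lines: expressing a child sphere's position relative to its parent concentrates the values near the origin, so a coarse quantizer loses less. A uniform scalar quantizer stands in here for the paper's trained VQ codebook; the positions and step size are illustrative assumptions.

```python
# Hedged sketch of parent-relative coordinates before quantization, as in the
# QSplat compression scheme described above. A uniform scalar quantizer
# stands in for the trained VQ codebook; all values are illustrative.

def to_local(child, parent):
    """Child position expressed in the parent's (axis-aligned) local frame."""
    return tuple(c - p for c, p in zip(child, parent))

def quantize(v, step):
    """Uniform quantization of each coordinate to the nearest multiple of step."""
    return tuple(round(x / step) * step for x in v)

parent = (10.0, 4.0, -2.0)
child = (10.3, 4.1, -1.8)

local = to_local(child, parent)          # small offsets near the origin
code = quantize(local, 0.25)             # coarse cell in the local frame
decoded = tuple(p + c for p, c in zip(parent, code))
```

Quantizing the small local offsets rather than the absolute positions is what lets the encoder spend very few bits per child sphere.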

  14. Solar abundances and 3D model atmospheres

    NASA Astrophysics Data System (ADS)

    Ludwig, Hans-Günter; Caffau, Elisabetta; Steffen, Matthias; Bonifacio, Piercarlo; Freytag, Bernd; Cayrel, Roger

    2010-03-01

    We present solar photospheric abundances for 12 elements from optical and near-infrared spectroscopy. The abundance analysis was conducted employing 3D hydrodynamical (CO5BOLD) as well as standard 1D hydrostatic model atmospheres. We compare our results to others with emphasis on discrepancies and still lingering problems, in particular exemplified by the pivotal abundance of oxygen. We argue that the thermal structure of the lower solar photosphere is very well represented by our 3D model. We obtain an excellent match of the observed center-to-limb variation of the line-blanketed continuum intensity, also at wavelengths shortward of the Balmer jump.

  15. Visualization of liver in 3-D

    NASA Astrophysics Data System (ADS)

    Chen, Chin-Tu; Chou, Jin-Shin; Giger, Maryellen L.; Kahn, Charles E., Jr.; Bae, Kyongtae T.; Lin, Wei-Chung

    1991-05-01

    Visualization of the liver in three dimensions (3-D) can improve the accuracy of volumetric estimation and also aid in surgical planning. We have developed a method for 3-D visualization of the liver using x-ray computed tomography (CT) or magnetic resonance (MR) images. This method includes four major components: (1) segmentation algorithms for extracting liver data from tomographic images; (2) interpolation techniques for both shape and intensity; (3) schemes for volume rendering and display; and (4) routines for electronic surgery and image analysis. This method has been applied to cases from a living-donor liver transplant project and appears to be useful for surgical planning.

  16. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images on a computer, the data can be used to create fashionable laser-engraved objects with a Q-switched Nd:YAG laser. In the medical field, we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  17. Anisotropy effects on 3D waveform inversion

    NASA Astrophysics Data System (ADS)

    Stekl, I.; Warner, M.; Umpleby, A.

    2010-12-01

    In recent years 3D waveform inversion has become an achievable procedure for seismic data processing. A number of datasets have been inverted and presented (Warner et al. 2008, Ben Hadj et al., Sirgue et al. 2010) using isotropic 3D waveform inversion. However, the question arises whether the results are affected by the isotropic assumption. Full-wavefield inversion techniques seek to match field data, wiggle-for-wiggle, to synthetic data generated by a high-resolution model of the sub-surface. In this endeavour, correctly matching the travel times of the principal arrivals is a necessary minimal requirement. In many, perhaps most, long-offset and wide-azimuth datasets, it is necessary to introduce some form of P-wave velocity anisotropy to match the travel times successfully. If this anisotropy is not also incorporated into the wavefield inversion, then results from the inversion will necessarily be compromised. We have incorporated anisotropy into our 3D wavefield tomography codes, characterised as spatially varying transverse isotropy with a tilted axis of symmetry (TTI anisotropy). This enhancement approximately doubles both the run time and the memory requirements of the code. We show that neglect of anisotropy can lead to significant artefacts in the recovered velocity models. We will present results of inverting an anisotropic 3D dataset under the assumption of an isotropic earth and compare them with the anisotropic inversion result. As a test case, the Marmousi model, extended to 3D with no velocity variation in the third direction and with added spatially varying anisotropy, is used. The acquisition geometry is assumed to be OBC with sources and receivers everywhere at the surface. We attempted inversion using both 2D and full 3D acquisition for this dataset. Results show that if no anisotropy is taken into account, then although the image looks plausible, most features are mispositioned in depth and space, even for relatively low anisotropy, which leads to an incorrect result. This may lead to

  18. FARGO3D: Hydrodynamics/magnetohydrodynamics code

    NASA Astrophysics Data System (ADS)

    Benítez Llambay, Pablo; Masset, Frédéric

    2015-09-01

    A successor of FARGO (ascl:1102.017), FARGO3D is a versatile HD/MHD code that runs on clusters of CPUs or GPUs, with special emphasis on protoplanetary disks. FARGO3D offers Cartesian, cylindrical or spherical geometry; 1-, 2- or 3-dimensional calculations; and orbital advection (aka FARGO) for HD and MHD calculations. As in FARGO, a simple Runge-Kutta N-body solver may be used to describe the orbital evolution of embedded point-like objects. There is no need to know CUDA; users can develop new functions in C and have them translated to CUDA automatically to run on GPUs.

  19. 3D Modeling Engine Representation Summary Report

    SciTech Connect

    Steven Prescott; Ramprasad Sampath; Curtis Smith; Timothy Yang

    2014-09-01

    Computers have long been used for 3D modeling and simulation, but only recently have computational resources been able to give realistic results in a reasonable time frame for large complex models. This summary report addresses the methods, techniques, and resources used to develop a 3D modeling engine to represent risk analysis simulation for advanced small modular reactor structures and components. The simulations done for this evaluation were focused on external events, specifically tsunami floods, for a hypothetical nuclear power facility on a coastline.

  20. Immersive 3D geovisualisation in higher education

    NASA Astrophysics Data System (ADS)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2014-05-01

    Through geovisualisation we explore spatial data, we analyse it with respect to specific questions, we synthesise results, and we present and communicate them to a specific audience (MacEachren & Kraak 1997). After centuries of paper maps, the means to represent and visualise our physical environment and its abstract qualities have changed dramatically since the 1990s - and accordingly the methods of using geovisualisation in teaching. Whereas some people might still consider the traditional classroom the ideal setting for teaching and learning geographic relationships and their mapping, we used a 3D CAVE (computer-animated virtual environment) as the environment for a problem-oriented learning project called "GEOSimulator". Focussing on this project, we empirically investigated whether a technological advance like the CAVE makes 3D visualisation, including 3D geovisualisation, not only an important tool for businesses (Abulrub et al. 2012) and for the public (Wissen et al. 2008), but also for educational purposes, for which it had hardly been used yet. The 3D CAVE is a three-sided visualisation platform that allows for immersive and stereoscopic visualisation of observed and simulated spatial data. We examined the benefits of immersive 3D visualisation for geographic research and education and synthesized three fundamental technology-based visual aspects: First, the conception and comprehension of space and location does not need to be generated, but is instantaneously and intuitively present through stereoscopy. Second, optical immersion into virtual reality strengthens this spatial perception, which is particularly important for complex 3D geometries. And third, a significant benefit is interactivity, which is enhanced through immersion and allows for multi-discursive and dynamic data exploration and knowledge transfer. Based on our problem-oriented learning project, which concentrates on a case study on flood risk management at the Wilde Weisseritz in Germany, a river

  1. Cryogenic 3D printing for tissue engineering.

    PubMed

    Adamkiewicz, Michal; Rubinsky, Boris

    2015-12-01

    We describe a new cryogenic 3D printing technology for freezing hydrogels, with a potential impact to tissue engineering. We show that complex frozen hydrogel structures can be generated when the 3D object is printed immersed in a liquid coolant (liquid nitrogen), whose upper surface is maintained at the same level as the highest deposited layer of the object. This novel approach ensures that the process of freezing is controlled precisely, and that already printed frozen layers remain at a constant temperature. We describe the device and present results which illustrate the potential of the new technology. PMID:26548335

  2. Innovations in 3D printing: a 3D overview from optics to organs.

    PubMed

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints. PMID:24288392

  3. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    NASA Astrophysics Data System (ADS)

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We will report our recent developments in DFD (depth-fused 3D) display and arc 3D display, both of which have smooth movement parallax. Firstly, the fatigueless DFD display, composed of only two layered displays with a gap, has continuous perceived depth obtained by changing the luminance ratio between the two images. Two new methods, called "edge-based DFD display" and "deep DFD display", have been proposed in order to solve two severe problems: the viewing angle and perceived depth limitations. The edge-based DFD display, layered from an original 2D image and its edge part with a gap, can relax the DFD viewing-angle limitation in both 2D and 3D perception. The deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Secondly, the arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. The curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, a floating image in a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. Directional scattering can be switched on/off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.

  4. The EISCAT_3D Science Case

    NASA Astrophysics Data System (ADS)

    Tjulin, A.; Mann, I.; McCrea, I.; Aikio, A. T.

    2013-05-01

    EISCAT_3D will be a world-leading international research infrastructure using the incoherent scatter technique to study the atmosphere in the Fenno-Scandinavian Arctic and to investigate how the Earth's atmosphere is coupled to space. The EISCAT_3D phased-array multistatic radar system will be operated by the EISCAT Scientific Association and thus be an integral part of an organisation that has successfully been running incoherent scatter radars for more than thirty years. The baseline design of the radar system contains a core site with transmitting and receiving capabilities located close to the intersection of the Swedish, Norwegian and Finnish borders and five receiving sites located within 50 to 250 km of the core. The EISCAT_3D project is currently in its Preparatory Phase and can smoothly transition into implementation in 2014, provided sufficient funding; construction can start in 2016 and first operations in 2018. The EISCAT_3D Science Case is prepared as part of the Preparatory Phase. It is regularly updated with annual new releases, and it aims at being a common document for the whole future EISCAT_3D user community. The areas covered by the Science Case are atmospheric physics and global change; space and plasma physics; solar system research; space weather and service applications; and radar techniques, new methods for coding and analysis. Two of the aims for EISCAT_3D are to understand the ways natural variability in the upper atmosphere, imposed by the Sun-Earth system, can influence the middle and lower atmosphere, and to improve the predictivity of atmospheric models by providing higher resolution observations to replace the current parametrised input. Observations by EISCAT_3D will also be used to monitor the direct effects from the Sun on the ionosphere-atmosphere system and those caused by solar wind magnetosphere-ionosphere interaction. In addition, EISCAT_3D will be used for remote sensing the large-scale behaviour of the magnetosphere from its

  5. Scoops3D: software to analyze 3D slope stability throughout a digital landscape

    USGS Publications Warehouse

    Reid, Mark E.; Christian, Sarah B.; Brien, Dianne L.; Henderson, Scott T.

    2015-01-01

    The computer program, Scoops3D, evaluates slope stability throughout a digital landscape represented by a digital elevation model (DEM). The program uses a three-dimensional (3D) method of columns approach to assess the stability of many (typically millions) potential landslides within a user-defined size range. For each potential landslide (or failure), Scoops3D assesses the stability of a rotational, spherical slip surface encompassing many DEM cells using a 3D version of either Bishop’s simplified method or the Ordinary (Fellenius) method of limit-equilibrium analysis. Scoops3D has several options for the user to systematically and efficiently search throughout an entire DEM, thereby incorporating the effects of complex surface topography. In a thorough search, each DEM cell is included in multiple potential failures, and Scoops3D records the lowest stability (factor of safety) for each DEM cell, as well as the size (volume or area) associated with each of these potential landslides. It also determines the least-stable potential failure for the entire DEM. The user has a variety of options for building a 3D domain, including layers or full 3D distributions of strength and pore-water pressures, simplistic earthquake loading, and unsaturated suction conditions. Results from Scoops3D can be readily incorporated into a geographic information system (GIS) or other visualization software. This manual includes information on the theoretical basis for the slope-stability analysis, requirements for constructing and searching a 3D domain, a detailed operational guide (including step-by-step instructions for using the graphical user interface [GUI] software, Scoops3D-i) and input/output file specifications, practical considerations for conducting an analysis, results of verification tests, and multiple examples illustrating the capabilities of Scoops3D. 
Easy-to-use software installation packages are available for the Windows or Macintosh operating systems; these packages
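    The limit-equilibrium analysis at the heart of Scoops3D can be illustrated in 2D. The sketch below iterates the implicit factor-of-safety equation of Bishop's simplified method for a circular slip surface divided into slices; the slice geometry and soil parameters are invented for illustration, and the 3D method-of-columns bookkeeping that Scoops3D performs over DEM cells is omitted.

```python
import math

# Hedged 2D sketch of Bishop's simplified method, the limit-equilibrium
# analysis Scoops3D applies (in 3D, over columns) to each trial slip surface.
# The slice data and soil parameters below are invented for illustration.

def bishop_fs(slices, c, phi_deg, tol=1e-6, max_iter=100):
    """Iterate the implicit Bishop factor-of-safety equation.

    slices: list of (W, alpha_deg, b, u) = slice weight, base inclination,
            slice width, pore pressure at the slice base.
    c, phi_deg: effective cohesion and friction angle of the soil.
    """
    tan_phi = math.tan(math.radians(phi_deg))
    fs = 1.0                                   # initial guess
    for _ in range(max_iter):
        num = 0.0
        den = 0.0
        for W, alpha_deg, b, u in slices:
            a = math.radians(alpha_deg)
            m_alpha = math.cos(a) * (1.0 + math.tan(a) * tan_phi / fs)
            num += (c * b + (W - u * b) * tan_phi) / m_alpha
            den += W * math.sin(a)
        fs_new = num / den
        if abs(fs_new - fs) < tol:             # fixed-point iteration converged
            return fs_new
        fs = fs_new
    return fs

# Three slices of a small trial circle (weights in kN/m, angles in degrees,
# widths in m, pore pressures in kPa): steeper slices near the crest.
slices = [(120.0, 35.0, 2.0, 10.0),
          (180.0, 20.0, 2.0, 20.0),
          (100.0, 5.0, 2.0, 5.0)]
fs = bishop_fs(slices, c=15.0, phi_deg=25.0)   # factor of safety > 1: stable
```

Scoops3D evaluates a 3D version of this quantity for millions of trial spherical surfaces and records the lowest factor of safety touching each DEM cell.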

  6. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and increased demand for smart phones, there has been significant growth in mobile TV markets. The rapid growth in technical, economical, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, so that people have more opportunities to come into contact with 3D content anytime and anywhere. Although mobile 3D technology is driving the current market growth, one important issue must be considered for consistent development and growth of the display market. In brief, human factors linked to mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study aims to investigate the effect of viewing distance on the human visual system when exposed to mobile 3D environments. Recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. Results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers against undesirable 3D effects, and support gradual progress towards human-friendly mobile 3D viewing.

  7. GPM 3D Flyby of Hurricane Lester

    NASA Video Gallery

    This 3-D flyby of Lester was created using GPM's Radar data. NASA/JAXA's GPM core observatory satellite flew over Hurricane Lester on August 29, 2016 at 7:21 p.m. EDT. Rain was measured by GPM's ra...

  8. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  9. 3D printed PLA-based scaffolds

    PubMed Central

    Serra, Tiziano; Mateos-Timoneda, Miguel A; Planell, Josep A; Navarro, Melba

    2013-01-01

    Rapid prototyping (RP), also known as additive manufacturing (AM), has been well received and adopted in the biomedical field. The capacity of this family of techniques to fabricate customized 3D structures with complex geometries and excellent reproducibility has revolutionized implantology and regenerative medicine. In particular, nozzle-based systems allow the fabrication of high-resolution polylactic acid (PLA) structures that are of interest in regenerative medicine. These 3D structures find interesting applications in the regenerative medicine field, where promising uses are considered, including biodegradable templates for tissue regeneration and 3D in vitro platforms for studying cell response to different scaffold conditions and for drug screening, among others. Scaffold functionality depends not only on the fabrication technique, but also on the material used to build the 3D structure, the geometry and inner architecture of the structure, and the final surface properties, all of which are crucial parameters affecting scaffold success. This Commentary emphasizes the importance of these parameters in scaffold fabrication and also draws attention to the versatility of these PLA scaffolds as a potential tool in regenerative medicine and other medical fields. PMID:23959206

  10. 3D printed microfluidics for biological applications.

    PubMed

    Ho, Chee Meng Benjamin; Ng, Sum Huan; Li, King Ho Holden; Yoon, Yong-Jin

    2015-01-01

    The term "Lab-on-a-Chip," is synonymous with describing microfluidic devices with biomedical applications. Even though microfluidics have been developing rapidly over the past decade, the uptake rate in biological research has been slow. This could be due to the tedious process of fabricating a chip and the absence of a "killer application" that would outperform existing traditional methods. In recent years, three dimensional (3D) printing has been drawing much interest from the research community. It has the ability to make complex structures with high resolution. Moreover, the fast building time and ease of learning has simplified the fabrication process of microfluidic devices to a single step. This could possibly aid the field of microfluidics in finding its "killer application" that will lead to its acceptance by researchers, especially in the biomedical field. In this paper, a review is carried out of how 3D printing helps to improve the fabrication of microfluidic devices, the 3D printing technologies currently used for fabrication and the future of 3D printing in the field of microfluidics. PMID:26237523

  11. Rubber Impact on 3D Textile Composites

    NASA Astrophysics Data System (ADS)

    Heimbs, Sebastian; Van Den Broucke, Björn; Duplessis Kergomard, Yann; Dau, Frederic; Malherbe, Benoit

    2012-06-01

    A low velocity impact study of aircraft tire rubber on 3D textile-reinforced composite plates was performed experimentally and numerically. In contrast to regular unidirectional composite laminates, no delaminations occur in such a 3D textile composite. Yarn decohesions, matrix cracks and yarn ruptures have been identified as the major damage mechanisms under impact load. An increase in the number of 3D warp yarns is proposed to improve the impact damage resistance. The characteristic of a rubber impact is the high amount of elastic energy stored in the impactor during impact, which was more than 90% of the initial kinetic energy. This large geometrical deformation of the rubber during impact leads to a less localised loading of the target structure and poses great challenges for the numerical modelling. A hyperelastic Mooney-Rivlin constitutive law was used in Abaqus/Explicit based on a step-by-step validation with static rubber compression tests and low velocity impact tests on aluminium plates. Simulation models of the textile weave were developed on the meso- and macro-scale. The final correlation between impact simulation results on 3D textile-reinforced composite plates and impact test data was promising, highlighting the potential of such numerical simulation tools.
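    The Mooney-Rivlin constitutive law used in the simulations above relates stress to stretch through two material constants. A minimal sketch for uniaxial tension of an incompressible rubber, with illustrative constants C10 and C01 (not the paper's calibrated values):

```python
# Hedged sketch of the Mooney-Rivlin hyperelastic law mentioned above, for
# uniaxial tension of an incompressible rubber. The constants C10 and C01
# are illustrative assumptions, not the calibrated values from the paper.

def mooney_rivlin_nominal_stress(stretch, c10, c01):
    """Nominal (engineering) uniaxial stress P = 2(l - l^-2)(C10 + C01/l)."""
    l = stretch
    return 2.0 * (l - l**-2) * (c10 + c01 / l)

C10, C01 = 0.3, 0.05   # MPa, hypothetical rubber constants

# The undeformed state (stretch = 1) carries no stress; stress grows with stretch.
unstretched = mooney_rivlin_nominal_stress(1.0, C10, C01)
stresses = [mooney_rivlin_nominal_stress(l, C10, C01) for l in (1.5, 2.0, 3.0)]
```

In a finite-element code such as Abaqus/Explicit the same two constants define the full 3D strain-energy density; this 1D reduction is only meant to show how the elastic energy stored in the stretched rubber arises from the model.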

  12. Introduction to 3D Graphics through Excel

    ERIC Educational Resources Information Center

    Benacka, Jan

    2013-01-01

    The article presents a method of explaining the principles of 3D graphics through making a revolvable and sizable orthographic parallel projection of a cuboid in Excel. No programming is used. The method was tried in fourteen 90 minute lessons with 181 participants, which were Informatics teachers, undergraduates of Applied Informatics and gymnasium…

  13. 3D Virtual Reality for Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Speck, Angela; Ruzhitskaya, L.; Laffey, J.; Ding, N.

    2012-01-01

    We are developing 3D virtual learning environments (VLEs) as learning materials for an undergraduate astronomy course, which will utilize advances both in available technologies and in our understanding of the social nature of learning. These learning materials will be used to test whether such VLEs can indeed augment science learning so that it is more engaging, active, visual and effective. Our project focuses on the challenges and requirements of introductory college astronomy classes. Here we present our virtual world of the Jupiter system and how we plan to implement it to allow students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The VLE allows students to work individually or collaboratively. The 3D world also provides an opportunity for research in astronomy education to investigate the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students' motivation and learning outcomes. This VLE is also a valuable setting for exploring how learners' spatial awareness can be enhanced by working in a 3D environment. We will present the Jupiter-system environment along with a preliminary study of the efficacy and usability of our Jupiter 3D VLE.

  14. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  15. How to See Shadows in 3D

    ERIC Educational Resources Information Center

    Parikesit, Gea O. F.

    2014-01-01

    Shadows can be found easily everywhere around us, so that we rarely find it interesting to reflect on how they work. In order to raise curiosity among students on the optics of shadows, we can display the shadows in 3D, particularly using a stereoscopic set-up. In this paper we describe the optics of stereoscopic shadows using simple schematic…

  16. 3-D Volume Rendering of Sand Specimen

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Computed tomography (CT) images of resin-impregnated Mechanics of Granular Materials (MGM) specimens are assembled to provide 3-D volume renderings of density patterns formed by dislocation under the external loading stress profile applied during the experiments. Experiments flown on STS-79 and STS-89. Principal Investigator: Dr. Stein Sture

  17. Crack interaction with 3-D dislocation loops

    NASA Astrophysics Data System (ADS)

    Gao, Huajian

    Cracks in a solid often interact with other crystal defects such as dislocation loops. The interaction effects are of 3-D character, yet their analytical treatment has mostly been limited to the 2-D regime due to mathematical complications. This paper shows that the distribution of stress intensity factors along a crack front due to arbitrary dislocation loops may be expressed as simple line integrals along the loop contours. The method of analysis is based on the 3-D Bueckner-Rice weight function theory for elastic crack analysis. Our results significantly simplify the calculations for 3-D dislocation loops produced in plastic processes at the crack front due to highly concentrated crack tip stress fields. Examples for crack-tip 3-D loops and 2-D straight dislocations emerging from the crack tip are given to demonstrate applications of the derived formulae. The results are consistent with previous analytical solutions existing in the literature. As further applications we also analyse straight dislocations that are parallel or perpendicular to the crack plane but not parallel to the crack front.

  18. Holography of incoherently illuminated 3D scenes

    NASA Astrophysics Data System (ADS)

    Shaked, Natan T.; Rosen, Joseph

    2008-04-01

    We review several methods of generating holograms of 3D realistic objects illuminated by incoherent white light. Using these methods, it is possible to obtain holograms with a simple digital camera operating in regular light conditions. Thus, most of the disadvantages of conventional holography, namely the need for a powerful, highly coherent laser and for meticulous stability of the optical system, are avoided. These holograms can be reconstructed optically by illuminating them with a coherent plane wave, or alternatively by using a digital reconstruction technique. To generate the proposed hologram, the 3D scene is captured from multiple points of view by a simple digital camera. Then, the acquired projections are digitally processed to yield the final hologram of the 3D scene. Based on this principle, we can generate Fourier, Fresnel, image or other types of holograms. To obtain certain advantages over regular holograms, we also propose new digital holograms, such as modified Fresnel holograms and protected correlation holograms. Instead of shifting the camera mechanically to acquire a different projection of the 3D scene each time, it is possible to use a microlens array to acquire all the projections in a single camera shot. Alternatively, only the extreme projections can be acquired experimentally, while the middle projections are predicted digitally using a view synthesis algorithm. The prospective goal of these methods is to facilitate the design of a simple, portable digital holographic camera which can be useful for a variety of practical applications.

  19. 3D puzzle reconstruction for archeological fragments

    NASA Astrophysics Data System (ADS)

    Jampy, F.; Hostein, A.; Fauvet, E.; Laligant, O.; Truchetet, F.

    2015-03-01

    The reconstruction of broken artifacts is a common task in the archeology domain; it can now be supported by 3D data acquisition devices and computer processing. Many works have been dedicated in the past to reconstructing 2D puzzles, but very few propose a true 3D approach. We present here a complete solution, including a dedicated transportable 3D acquisition set-up and a virtual tool with a graphic interface allowing the archeologists to manipulate the fragments and to reconstruct the puzzle interactively. The whole lateral part is acquired by rotating the fragment around an axis chosen within a light sheet, thanks to a step-motor synchronized with the camera frame clock. Another camera provides a top view of the fragment under scanning. A scanning accuracy of 100 μm is attained. The iterative automatic processing algorithm is based on segmentation of the lateral part of the fragments into facets, followed by a 3D matching that provides the user with a ranked short list of possible assemblies. The device has been applied to the reconstruction of a set of 1200 fragments from broken tablets bearing a Latin inscription dating from the first century AD.

  20. 3D Cell Culture in Alginate Hydrogels

    PubMed Central

    Andersen, Therese; Auk-Emblem, Pia; Dornish, Michael

    2015-01-01

    This review compiles information regarding the use of alginate, and in particular alginate hydrogels, in culturing cells in 3D. Knowledge of alginate chemical structure and functionality are shown to be important parameters in design of alginate-based matrices for cell culture. Gel elasticity as well as hydrogel stability can be impacted by the type of alginate used, its concentration, the choice of gelation technique (ionic or covalent), and divalent cation chosen as the gel inducing ion. The use of peptide-coupled alginate can control cell–matrix interactions. Gelation of alginate with concomitant immobilization of cells can take various forms. Droplets or beads have been utilized since the 1980s for immobilizing cells. Newer matrices such as macroporous scaffolds are now entering the 3D cell culture product market. Finally, delayed gelling, injectable, alginate systems show utility in the translation of in vitro cell culture to in vivo tissue engineering applications. Alginate has a history and a future in 3D cell culture. Historically, cells were encapsulated in alginate droplets cross-linked with calcium for the development of artificial organs. Now, several commercial products based on alginate are being used as 3D cell culture systems that also demonstrate the possibility of replacing or regenerating tissue. PMID:27600217

  1. [3D virtual endoscopy of heart].

    PubMed

    Du, Aan; Yang, Xin; Xue, Haihong; Yao, Liping; Sun, Kun

    2012-10-01

    In this paper, we present a virtual endoscopy (VE) system for the diagnosis of heart diseases that is efficient, affordable, and easy to popularize for viewing the interior of the heart. Dual-source CT (DSCT) data were used as the primary data in our system. The 3D structure of the virtual heart was reconstructed with GPU-based 3D texture mapping and can be displayed dynamically in real time. During real-time display, we can not only observe the inside of the heart chambers but also examine new viewing angles using 3D data clipped according to the physician's needs. Both an interactive observation mode and an automatic mode are provided. In the automatic mode, Dijkstra's algorithm, with the 3D Euclidean distance as the weighting factor, is used to find the view path quickly, and the view path is then used to compute the four-chamber plane. PMID:23198444
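    The automatic view-path computation described above reduces to a standard shortest-path search over 3D points. A minimal sketch of Dijkstra's algorithm with Euclidean edge weights follows; the graph data and function name are illustrative, not the paper's implementation:

```python
import heapq
import math

def dijkstra_path(points, edges, start, goal):
    """Shortest path between two nodes of a graph whose nodes are 3D
    points, using the Euclidean distance between connected points as
    the edge weight."""
    adj = {i: [] for i in range(len(points))}
    for u, v in edges:          # undirected graph
        adj[u].append(v)
        adj[v].append(u)
    best = {start: 0.0}         # best known distance per node
    prev = {}                   # predecessor, for path recovery
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > best.get(u, math.inf):
            continue            # stale heap entry
        for v in adj[u]:
            nd = d + math.dist(points[u], points[v])
            if nd < best.get(v, math.inf):
                best[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [goal]               # walk predecessors back to the start
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], best[goal]
```

    The same routine works for any positive edge weight; only the `math.dist` call encodes the "3D Euclidean distance as weighting factor" choice.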

  2. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  3. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  4. NASA Sees Typhoon Rammasun in 3-D

    NASA Video Gallery

    NASA's TRMM satellite flew over on July 14, 2014 at 1819 UTC and data was used to make this 3-D flyby showing thunderstorms to heights of almost 17km (10.5 miles). Rain was measured falling at a ra...

  5. 3-D Teaching Models for All

    ERIC Educational Resources Information Center

    Bradley, Joan; Farland-Smith, Donna

    2010-01-01

    Allowing a student to "see" through touch what other students see through a microscope can be a challenging task. Therefore, author Joan Bradley created three-dimensional (3-D) models with one student's visual impairment in mind. They are meant to benefit all students and can be used to teach common high school biology topics, including the…

  6. A Rotation Invariant in 3-D Reaching

    ERIC Educational Resources Information Center

    Mitra, Suvobrata; Turvey, M. T.

    2004-01-01

    In 3 experiments, the authors investigated changes in hand orientation during a 3-D reaching task that imposed specific position and orientation requirements on the hand's initial and final postures. Instantaneous hand orientation was described using 3-element rotation vectors representing current orientation as a rotation from a fixed reference…

  7. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in the precise 3D measurement of objects. In this paper the potential of thermal video for 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video; then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens, and was mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show the accuracy of the 3D model generated from thermal images to be comparable to that of the DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows a Root Mean Square Error (RMSE) smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
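    The accuracy check at the end of the pipeline above is a per-axis RMSE between DSM-derived points and surveyed ground control points. A minimal sketch (the point data and function name are hypothetical):

```python
import math

def rmse_per_axis(measured, reference):
    """Root Mean Square Error along X, Y, Z between matched 3D points,
    e.g. DSM-derived check points vs. surveyed GCP coordinates."""
    n = len(measured)
    out = []
    for axis in range(3):
        sq = sum((m[axis] - r[axis]) ** 2 for m, r in zip(measured, reference))
        out.append(math.sqrt(sq / n))
    return tuple(out)  # (rmse_x, rmse_y, rmse_z)
```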

  8. Uncertainty in 3D gel dosimetry

    NASA Astrophysics Data System (ADS)

    De Deene, Yves; Jirasek, Andrew

    2015-01-01

    Three-dimensional (3D) gel dosimetry has a unique role to play in safeguarding conformal radiotherapy treatments, as the technique can cover the full treatment chain and provides the radiation oncologist with the integrated dose distribution in 3D. It can also be applied to benchmark new treatment strategies such as image-guided and tracking radiotherapy techniques. A major obstacle that has hindered the wider dissemination of gel dosimetry in radiotherapy centres is a lack of confidence in the reliability of the measured dose distribution. Uncertainties in 3D dosimeters are attributed to both dosimeter properties and scanning performance. In polymer gel dosimetry with MRI readout, discrepancies in dose response between large polymer gel dosimeters and small calibration phantoms have been reported, which can lead to significant inaccuracies in the dose maps. The sources of error in polymer gel dosimetry with MRI readout are well understood, and it has been demonstrated that with a carefully designed scanning protocol the overall uncertainty in absolute dose that can currently be obtained falls within 5% on an individual voxel basis, for a minimum voxel size of 5 mm³. However, several research groups have chosen to use polymer gel dosimetry in a relative manner by normalizing the dose distribution towards an internal reference dose within the gel dosimeter phantom. 3D dosimetry with optical scanning has also mostly been applied in a relative way, although in principle absolute calibration is possible. As the optical absorption in 3D dosimeters is less dependent on temperature, the achievable accuracy can be expected to be higher with optical CT. The precision in optical scanning of 3D dosimeters depends to a large extent on the performance of the detector. 3D dosimetry with X-ray CT readout is a low contrast imaging modality for polymer gel dosimetry. Sources of error in x-ray CT polymer gel dosimetry (XCT) are currently under investigation and include inherent

  9. Automatic Building Extraction and Roof Reconstruction in 3K Imagery Based on Line Segments

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Tian, J.; Kurz, F.

    2016-06-01

    We propose an image processing workflow to extract rectangular building footprints using georeferenced stereo-imagery and a derivative digital surface model (DSM) product. The approach applies a line segment detection procedure to the imagery and subsequently verifies identified line segments individually to create a footprint on the basis of the DSM. The footprint is further optimized by morphological filtering. Towards the realization of 3D models, we decompose the produced footprint and generate a 3D point cloud from DSM height information. By utilizing the robust RANSAC plane fitting algorithm, the roof structure can be correctly reconstructed. In an experimental part, the proposed approach has been performed on 3K aerial imagery.
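    The RANSAC plane-fitting step used above for roof reconstruction can be sketched as follows. This is a generic illustration of the algorithm with hypothetical parameter values, not the authors' code: repeatedly fit a plane through three random points and keep the plane supported by the most inliers.

```python
import math
import random

def fit_plane_ransac(points, n_iters=200, tol=0.05, seed=0):
    """RANSAC plane fit on a 3D point cloud: fit a candidate plane to
    3 random points, count points within `tol` of it, keep the best."""
    rng = random.Random(seed)

    def plane_from_3(p, q, r):
        # Unit normal via cross product of two in-plane vectors.
        u = [q[i] - p[i] for i in range(3)]
        v = [r[i] - p[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = math.sqrt(sum(c * c for c in n))
        if norm == 0:            # degenerate (collinear) sample
            return None
        n = [c / norm for c in n]
        d = -sum(n[i] * p[i] for i in range(3))
        return n, d              # plane: n . x + d = 0

    best_plane, best_inliers = None, []
    for _ in range(n_iters):
        plane = plane_from_3(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers
```

    In a roof-reconstruction setting one would run this repeatedly, removing the inliers of each detected plane to recover the remaining roof faces.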

  10. Laser printing of 3D metallic interconnects

    NASA Astrophysics Data System (ADS)

    Beniam, Iyoel; Mathews, Scott A.; Charipar, Nicholas A.; Auyeung, Raymond C. Y.; Piqué, Alberto

    2016-04-01

    The use of laser-induced forward transfer (LIFT) techniques for the printing of functional materials has been demonstrated for numerous applications. The printing gives rise to patterns, which can be used to fabricate planar interconnects. More recently, various groups have demonstrated electrical interconnects from laser-printed 3D structures. The laser printing of these interconnects takes place through aggregation of voxels of either molten metal or of pastes containing dispersed metallic particles. However, the generated 3D structures do not possess the same metallic conductivity as a bulk metal interconnect of the same cross-section and length as those formed by wire bonding or tab welding. An alternative is to laser transfer entire 3D structures using a technique known as lase-and-place. Lase-and-place is a LIFT process whereby whole components and parts can be transferred from a donor substrate onto a desired location with one single laser pulse. This paper will describe the use of LIFT to laser print freestanding, solid metal foils or beams precisely over the contact pads of discrete devices to interconnect them into fully functional circuits. Furthermore, this paper will also show how the same laser can be used to bend or fold the bulk metal foils prior to transfer, thus forming compliant 3D structures able to provide strain relief for the circuits under flexing or during motion from thermal mismatch. These interconnect "ridges" can span wide gaps (on the order of a millimeter) and accommodate height differences of tens of microns between adjacent devices. Examples of these laser printed 3D metallic bridges and their role in the development of next generation electronics by additive manufacturing will be presented.

  11. Volume rendering for interactive 3D segmentation

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Derz, Claus

    1997-05-01

    Combined emission/absorption and reflection/transmission volume rendering is able to display poorly segmented structures from 3D medical image sequences. Visual cues such as shading and color let the user distinguish structures in the 3D display that are incompletely extracted by threshold segmentation. To be truly helpful, analyzed information needs to be quantified and transferred back into the data. We extend our previously presented scheme for such display by establishing a communication between visual analysis and the display process. The main tool is a selective 3D picking device. To be useful on a rather rough segmentation, the device itself and the display offer facilities for object selection. Selective intersection planes let the user discard information prior to choosing a tissue of interest. Subsequently, picking is carried out on the 2D display by casting a ray into the volume. The picking device is made pre-selective using already existing segmentation information. Thus, objects can be picked that are visible behind semi-transparent surfaces of other structures. Information generated by a later connected-component analysis can then be integrated into the data. Data examination continues on an improved display, letting the user actively participate in the analysis process. This display-and-interaction scheme proved to be very effective: the viewer's ability to extract relevant information from a complex scene is combined with the computer's ability to quantify this information. The approach introduces 3D computer graphics methods into user-guided image analysis, creating an analysis-synthesis cycle for interactive 3D segmentation.
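    The ray-casting pick described above can be sketched as a first-hit search along a ray through the voxel volume. The sketch below uses uniform stepping rather than exact voxel traversal, and all names and thresholds are hypothetical:

```python
import math

def pick_voxel(volume, origin, direction, threshold, step=0.5, max_t=100.0):
    """March along a ray from `origin` in `direction` through a voxel
    volume (nested lists indexed volume[z][y][x]); return the index of
    the first voxel whose value exceeds `threshold`, else None."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    t = 0.0
    while t <= max_t:
        x = origin[0] + t * direction[0]
        y = origin[1] + t * direction[1]
        z = origin[2] + t * direction[2]
        # floor (not int()) so negative coordinates fail the bounds test
        i, j, k = math.floor(z), math.floor(y), math.floor(x)
        if 0 <= i < nz and 0 <= j < ny and 0 <= k < nx \
                and volume[i][j][k] > threshold:
            return (i, j, k)
        t += step
    return None
```

    Making the pick "pre-selective", as in the paper, would amount to testing a segmentation label volume instead of (or in addition to) the raw intensity at each step.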

  12. 3-D inversion of magnetotelluric Phase Tensor

    NASA Astrophysics Data System (ADS)

    Patro, Prasanta; Uyeshima, Makoto

    2010-05-01

    Three-dimensional (3-D) inversion of magnetotelluric (MT) data has become routine practice in the MT community due to progress in algorithms for 3-D inverse problems (e.g. Mackie and Madden, 1993; Siripunvaraporn et al., 2005). While the availability of such 3-D inversion codes has increased the resolving power of MT data and improved interpretation, galvanic effects still pose difficulties in interpreting the resistivity structure obtained from MT data. To tackle the galvanic distortion of MT data, Caldwell et al. (2004) introduced the concept of the phase tensor. They demonstrated how the regional phase information can be retrieved from the observed impedance tensor without any assumptions about structural dimension, where both the near-surface inhomogeneity and the regional conductivity structure can be 3-D. We modified a 3-D inversion code (Siripunvaraporn et al., 2005) to directly invert the phase tensor elements. We present here the main modifications made in the sensitivity calculation and then show a few synthetic studies and an application to real data. The synthetic model study suggests that the prior model (m_0) setting is important in retrieving the true model, because the phase tensor inversion process lacks an estimate of the correct induction scale length. Comparison between results from conventional impedance inversion and the new phase tensor inversion suggests that, in spite of the galvanic distortion (due to near-surface checkerboard anomalies in our case), the new inversion algorithm retrieves the regional conductivity structure reliably. We applied the new inversion to real data from the Indian subcontinent and compared the results with those from conventional impedance inversion.
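    The phase tensor of Caldwell et al. (2004) is defined from the 2x2 impedance tensor Z = X + iY as Phi = X^-1 Y. A small sketch below computes it and demonstrates the property that motivates the inversion: a real galvanic distortion Z' = C Z (C a real 2x2 matrix) leaves Phi unchanged, since (CX)^-1 (CY) = X^-1 Y. The numerical values are arbitrary illustrations:

```python
def phase_tensor(Z):
    """Magnetotelluric phase tensor Phi = X^-1 Y, where Z = X + iY is
    the 2x2 complex impedance tensor (nested lists of complex numbers)."""
    X = [[Z[i][j].real for j in range(2)] for i in range(2)]
    Y = [[Z[i][j].imag for j in range(2)] for i in range(2)]
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    # explicit 2x2 inverse of the real part X
    Xinv = [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]
    # matrix product Xinv * Y
    return [[sum(Xinv[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

    Because Phi is unaffected by the real distortion matrix C, inverting phase tensor elements sidesteps the galvanic distortion that contaminates the impedances themselves.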

  13. Recognition methods for 3D textured surfaces

    NASA Astrophysics Data System (ADS)

    Cula, Oana G.; Dana, Kristin J.

    2001-06-01

    Texture as a surface representation is the subject of a wide body of computer vision and computer graphics literature. While texture is always associated with a form of repetition in the image, the repeating quantity may vary. The texture may be a color or albedo variation as in a checkerboard, a paisley print or zebra stripes. Very often in real-world scenes, texture is instead due to a surface height variation, e.g. pebbles, gravel, foliage and any rough surface. Such surfaces are referred to here as 3D textured surfaces. Standard texture recognition algorithms are not appropriate for 3D textured surfaces because the appearance of these surfaces changes in a complex manner with viewing direction and illumination direction. Recent methods have been developed for recognition of 3D textured surfaces using a database of surfaces observed under varied imaging parameters. One of these methods is based on 3D textons obtained using K-means clustering of multiscale feature vectors. Another method uses eigen-analysis originally developed for appearance-based object recognition. In this work we develop a hybrid approach that employs both feature grouping and dimensionality reduction. The method is tested using the Columbia-Utrecht texture database and provides excellent recognition rates. The method is compared with existing recognition methods for 3D textured surfaces. A direct comparison is facilitated by empirical recognition rates from the same texture data set. The current method has key advantages over existing methods including requiring less prior information on both the training and novel images.
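    The texton-dictionary step mentioned above, K-means clustering of multiscale filter-response vectors, can be sketched as follows. This is a generic plain-Python k-means for illustration, not the authors' implementation; the toy vectors stand in for per-pixel feature vectors:

```python
import math
import random

def kmeans(vectors, k, n_iters=20, seed=0):
    """Plain k-means, as used to build a texton dictionary from
    multiscale filter-response vectors (one tuple per pixel)."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)        # initialize from the data
    for _ in range(n_iters):
        # assign each vector to its nearest center
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k), key=lambda c: math.dist(v, centers[c]))
            clusters[j].append(v)
        # move each center to the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centers
```

    The resulting cluster centers are the "textons"; each pixel of a novel image is then labeled with its nearest texton, and the histogram of labels serves as the surface descriptor.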

  14. 3D Printed Programmable Release Capsules.

    PubMed

    Gupta, Maneesh K; Meng, Fanben; Johnson, Blake N; Kong, Yong Lin; Tian, Limei; Yeh, Yao-Wen; Masters, Nina; Singamaneni, Srikanth; McAlpine, Michael C

    2015-08-12

    The development of methods for achieving precise spatiotemporal control over chemical and biomolecular gradients could enable significant advances in areas such as synthetic tissue engineering, biotic-abiotic interfaces, and bionanotechnology. Living organisms guide tissue development through highly orchestrated gradients of biomolecules that direct cell growth, migration, and differentiation. While numerous methods have been developed to manipulate and implement biomolecular gradients, integrating gradients into multiplexed, three-dimensional (3D) matrices remains a critical challenge. Here we present a method to 3D print stimuli-responsive core/shell capsules for programmable release of multiplexed gradients within hydrogel matrices. These capsules are composed of an aqueous core, which can be formulated to maintain the activity of payload biomolecules, and a poly(lactic-co-glycolic) acid (PLGA, an FDA approved polymer) shell. Importantly, the shell can be loaded with plasmonic gold nanorods (AuNRs), which permits selective rupturing of the capsule when irradiated with a laser wavelength specifically determined by the lengths of the nanorods. This precise control over space, time, and selectivity allows for the ability to pattern 2D and 3D multiplexed arrays of enzyme-loaded capsules along with tunable laser-triggered rupture and release of active enzymes into a hydrogel ambient. The advantages of this 3D printing-based method include (1) highly monodisperse capsules, (2) efficient encapsulation of biomolecular payloads, (3) precise spatial patterning of capsule arrays, (4) "on the fly" programmable reconfiguration of gradients, and (5) versatility for incorporation in hierarchical architectures. Indeed, 3D printing of programmable release capsules may represent a powerful new tool to enable spatiotemporal control over biomolecular gradients. PMID:26042472

  16. Interpretation of 2D and 3D Building Details on Facades and Roofs

    NASA Astrophysics Data System (ADS)

    Meixner, P.; Leberl, F.; Brédif, M.

    2011-04-01

    Current Internet-inspired mapping data take the form of street maps, orthophotos, 3D models or street-side images, and serve mostly to support search and navigation. Yet the only mapping data that can currently really be searched are the street maps, via their addresses and coordinates. The orthophotos, 3D models and street-side images remain predominantly "eye candy" with little added value to the Internet user. We are interested in characterizing the elements of the urban space from imagery. In this paper we discuss the use of street-side imagery and aerial imagery to develop descriptions of urban spaces, initially of building facades and roofs. We present methods (a) to segment facades using high-overlap street-side facade images, (b) to map facades and facade details from vertical aerial images, and (c) to characterize roofs by their type and details, also from aerial photography. This paper describes a method of roof segmentation with the goal of assigning each roof to a specific architectural style. Questions about the use of the attic space, or the placement of solar panels, are of interest. Roofs have recently been mapped using LiDAR point clouds; we demonstrate that aerial images are a useful and economical alternative to LiDAR for characterizing building roofs, and that they also contain very valuable information about facades.

  17. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server.

    PubMed

    Cannone, Jamie J; Sweeney, Blake A; Petrov, Anton I; Gutell, Robin R; Zirbel, Craig L; Leontis, Neocles

    2015-07-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
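
    The programmatic access described above can be sketched as follows. Only the base URL comes from the abstract; the `/api` endpoint path, the `pdb`/`units` parameter names, and the response schema in `summarize_variants` are illustrative assumptions, not the documented interface — consult the server itself for the actual API.

```python
# Hypothetical sketch of programmatic access to the R3D-2-MSA service.
# Endpoint path, parameter names, and response schema are assumptions.
from urllib.parse import urlencode

BASE_URL = "http://rna.bgsu.edu/r3d-2-msa"  # stated in the abstract


def build_query(pdb_id, chain, ranges):
    """Encode up to five nucleotide ranges (the server's stated limit)
    into a query URL. Parameter names here are illustrative only."""
    if len(ranges) > 5:
        raise ValueError("server accepts at most five ranges per query")
    units = ",".join(f"{chain}:{start}-{stop}" for start, stop in ranges)
    return f"{BASE_URL}/api?{urlencode({'pdb': pdb_id, 'units': units})}"


def summarize_variants(response):
    """Tally distinct sequence variants from a JSON response, mimicking
    the 'statistical summary' of the browser output (assumed schema)."""
    counts = {}
    for seq in response.get("sequences", []):
        counts[seq] = counts.get(seq, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])


# Toy stand-in for a decoded JSON response from the server:
sample = {"sequences": ["GCAU", "GCAU", "GCGU"]}
print(build_query("4V88", "A6", [(1405, 1410), (1490, 1497)]))
print(summarize_variants(sample))
```

    In practice the URL would be fetched with any HTTP client and the JSON decoded before being passed to `summarize_variants`.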

  18. A 3D radiative transfer framework. VI. PHOENIX/3D example applications

    NASA Astrophysics Data System (ADS)

    Hauschildt, P. H.; Baron, E.

    2010-01-01

    Aims: We demonstrate the application of our 3D radiative transfer framework in the model atmosphere code PHOENIX for a number of spectrum synthesis calculations for very different conditions. Methods: The 3DRT framework discussed in the previous papers of this series was added to our general-purpose model atmosphere code PHOENIX/1D and an extended 3D version PHOENIX/3D was created. The PHOENIX/3D code is parallelized via the MPI library using a hierarchical domain decomposition and displays very good strong scaling. Results: We present the results of several test cases for widely different atmosphere conditions and compare the 3D calculations with equivalent 1D models to assess the internal accuracy of the 3D modeling. In addition, we show the results for a number of parameterized 3D structures. Conclusions: With presently available computational resources it is possible to solve the full 3D radiative transfer (including scattering) problem with the same micro-physics as included in 1D modeling.
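
    The hierarchical domain decomposition used for the MPI parallelization can be illustrated with a generic recursive-bisection sketch: split the 3D grid along its longest axis, in proportion to the workers assigned to each half, until one subdomain remains per worker. This is a minimal illustration of the general idea only, not the actual PHOENIX/3D implementation.

```python
# Generic hierarchical (recursive-bisection) domain decomposition sketch.
def decompose(extent, n_workers):
    """extent: ((x0, x1), (y0, y1), (z0, z1)) half-open index ranges.
    Returns a list of n_workers subdomain extents covering the grid."""
    if n_workers == 1:
        return [extent]
    # Split along the longest axis so subdomains stay compact.
    axis = max(range(3), key=lambda a: extent[a][1] - extent[a][0])
    lo, hi = extent[axis]
    left_workers = n_workers // 2
    # Cut proportionally to the worker split to balance cell counts.
    cut = lo + (hi - lo) * left_workers // n_workers
    left, right = list(extent), list(extent)
    left[axis] = (lo, cut)
    right[axis] = (cut, hi)
    return (decompose(tuple(left), left_workers)
            + decompose(tuple(right), n_workers - left_workers))


domains = decompose(((0, 64), (0, 64), (0, 32)), 4)
cells = sum((x1 - x0) * (y1 - y0) * (z1 - z0)
            for (x0, x1), (y0, y1), (z0, z1) in domains)
print(len(domains), cells)  # every grid cell assigned exactly once
```

    In an MPI setting each rank would then solve the transfer problem on its own subdomain and exchange boundary data with neighbours.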

  19. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960

  20. The dimension added by 3D scanning and 3D printing of meteorites

    NASA Astrophysics Data System (ADS)

    de Vet, S. J.

    2016-01-01

    An overview of the 3D photodocumentation of meteorites is presented, focussing on two 3D scanning methods in relation to 3D printing. The 3D photodocumentation of meteorites provides new ways for the digital preservation of culturally, historically or scientifically unique meteorites. It has the potential to become a new documentation standard for meteorites that can exist alongside traditional photographic documentation. Notable applications include (i.) the use of physical properties in dark flight-, strewn field-, or aerodynamic modelling; (ii.) collection research on meteorites curated in different museum collections; and (iii.) public dissemination of meteorite models as a resource for educational users. These possible applications illustrate the benefits of the additional dimension of 3D for the meteoritics community.
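
    Application (i.) above relies on physical properties recoverable from a scanned model: the volume of a closed triangle mesh follows from the divergence theorem (summing signed tetrahedra), and mass then follows from an assumed bulk density. A minimal sketch, using a toy tetrahedron as a stand-in for a scanned meteorite mesh and an illustrative density value:

```python
# Volume of a closed, outward-oriented triangle mesh via signed tetrahedra.
def mesh_volume(vertices, faces):
    """Sum the signed volumes of tetrahedra spanned by the origin and
    each face; for a closed outward-wound mesh this is the enclosed volume."""
    total = 0.0
    for i, j, k in faces:
        (ax, ay, az) = vertices[i]
        (bx, by, bz) = vertices[j]
        (cx, cy, cz) = vertices[k]
        # Scalar triple product a . (b x c) = 6 * signed tetra volume
        total += (ax * (by * cz - bz * cy)
                  - ay * (bx * cz - bz * cx)
                  + az * (bx * cy - by * cx))
    return total / 6.0


# Toy "scan": unit right tetrahedron, faces wound counterclockwise
# as seen from outside. A real scan would supply thousands of faces.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, faces)   # 1/6 in mesh units
mass = vol * 3.3                  # assumed chondrite-like density, g/cm^3
print(round(vol, 6), round(mass, 3))
```

    The same computation on a scanned mesh in cm gives volume in cm³, from which dark-flight and strewn-field models can take mass and density inputs.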