Science.gov

Sample records for 3d ladar imagery

  1. Identifying targets under trees: jigsaw 3D ladar test results

    NASA Astrophysics Data System (ADS)

    Ludwig, David; Kongable, Albert; Krywick, Scott; Albrecht, H. T.; Kamrath, G.; Milam, Jerry; Brown, David; Fetzer, Gregory J.; Hanna, Keith

    2003-08-01

    A 3D direct detection imaging laser radar was developed and tested to demonstrate the ability to image objects highly obscured by foliage or camouflage netting. The LADAR provides high-resolution imagery from a narrow pulse-width transmitter, high-frequency receiver, and 3D visualization software for near-real-time data display. This work was accomplished under DARPA contract number DAAD17-01-D0006/0002.

  2. Threat object identification performance for LADAR imagery: comparison of 2-dimensional versus 3-dimensional imagery

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Matthew A.; Driggers, Ronald G.; Redman, Brian; Krapels, Keith A.

    2006-05-01

    This research was conducted to determine the change in human observer range performance when LADAR imagery is presented in stereo 3D rather than 2D. It compares the ability of observers to correctly identify twelve common threatening and non-threatening single-handed objects (e.g., a pistol versus a cell phone). Images were collected with the Army Research Lab/Office of Naval Research (ARL/ONR) Short Wave Infrared (SWIR) Imaging LADAR. A perception experiment, utilizing both military and civilian observers, presented subjects with images of varying angular resolutions. The results of this experiment were used to create identification performance curves for the 2D and 3D imagery, which show probability of identification as a function of range. Analysis of the results indicates that there is no evidence of a statistically significant difference in performance between 2D and 3D imagery.

  3. Chirped amplitude modulation ladar for range and Doppler measurements and 3-D imaging

    NASA Astrophysics Data System (ADS)

    Stann, Barry; Redman, Brian C.; Lawler, William; Giza, Mark; Dammann, John; Krapels, Keith

    2007-04-01

    Shipboard infrared search and track (IRST) systems can detect sea-skimming anti-ship missiles at long ranges, but cannot distinguish missiles from slowly moving false targets and clutter. In a joint Army-Navy program, the Army Research Laboratory (ARL) is developing a ladar to provide unambiguous range and velocity measurements of targets detected by the distributed aperture system (DAS) IRST system being developed by the Naval Research Laboratory (NRL) sponsored by the Office of Naval Research (ONR). By using the ladar's range and velocity data, false alarms and clutter objects will be distinguished from incoming missiles. Because the ladar uses an array receiver, it can also provide three-dimensional (3-D) imagery of potential threats at closer ranges in support of the force protection/situational awareness mission. The ladar development is being accomplished in two phases. In Phase I, ARL designed, built, and reported on an initial breadboard ladar for proof-of-principle static platform field tests. In Phase II, ARL was tasked to design and test an advanced breadboard ladar that corrected various shortcomings in the transmitter optics and receiver electronics and improved the signal processing and display code. The advanced breadboard will include a high power laser source utilizing a long pulse erbium amplifier built under contract. Because award of the contract for the erbium amplifier was delayed, final assembly of the advanced ladar has also been delayed. In the course of this year's work we built a "research receiver" to facilitate design revisions, which, when combined with a low-power laser, enabled us to demonstrate the viability of the components and subsystems comprising the advanced ladar.

  4. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions at ranges up to hundreds of meters. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects that is not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  5. A model and simulation to predict the performance of angle-angle-range 3D flash LADAR imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2005-10-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. 3D Flash LADAR is the latest evolution of laser radar systems and provides unique capability in its ability to provide high-resolution LADAR imagery upon a single laser pulse, rather than constructing an image from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance from these 3D LADAR systems have been lacking, relying upon either single pixel LADAR performance or extrapolating from passive detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants; and 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel in the array. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, and probability of detection vs. range.
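
    The per-pixel gain treatment described above lends itself to a short Monte Carlo illustration. The sketch below is not the BAE SYSTEMS model; it is a minimal example, assuming Gaussian-distributed detector gain with made-up statistics, of how comparing a uniform random draw against the cumulative probability function assigns a distinct gain to every pixel in the array (implemented here via the equivalent inverse-CDF step).

      import numpy as np
      from scipy.stats import norm

      # Hypothetical array size and gain statistics (illustration only).
      ROWS, COLS = 128, 128
      MEAN_GAIN, GAIN_SIGMA = 100.0, 8.0

      rng = np.random.default_rng(seed=0)

      # One uniform random number per pixel, inverted through the normal CDF.
      # This is equivalent to comparing the random number against the
      # cumulative probability function and reading off the matching gain.
      u = rng.uniform(size=(ROWS, COLS))
      gain_map = norm.ppf(u, loc=MEAN_GAIN, scale=GAIN_SIGMA)

      # A uniform optical return scaled by the per-pixel gain yields the
      # non-uniform array response that the model accounts for.
      signal_in = np.full((ROWS, COLS), 1.0e-9)      # arbitrary units
      signal_out = signal_in * gain_map
      print("gain spread: %.1f to %.1f" % (gain_map.min(), gain_map.max()))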

  6. A model and simulation to predict the performance of angle-angle-range 3D flash ladar imaging sensor systems

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Odhner, Jefferson E.; Russo, Leonard E.; McDaniel, Robert V.

    2004-11-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. 3D Flash LADAR is the latest evolution of laser radar systems and provides unique capability in its ability to provide high-resolution LADAR imagery upon a single laser pulse, rather than constructing an image from multiple pulses as with conventional scanning LADAR systems. However, accurate methods to model and simulate performance from these 3D LADAR systems have been lacking, relying upon either single pixel LADAR performance or extrapolating from passive detection FPA performance. The model and simulation developed and reported here is expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment, this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) atmospheric transmission; 4) atmospheric backscatter; 5) atmospheric turbulence; 6) obscurants; and 7) obscurant path length. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel in the array. Here, noise sources are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel. Model outputs are in the form of 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array. Other outputs include power distribution from a target, signal-to-noise vs. range, and probability of detection vs. range.

  7. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  8. A range/depth modulation transfer function (RMTF) framework for characterizing 3D imaging LADAR performance

    NASA Astrophysics Data System (ADS)

    Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas

    2005-05-01

    3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in three dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.
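
    The Range-Based, Slanted Knife-Edge idea can be sketched numerically. The example below is an illustrative reconstruction, not Ball's actual test: it builds an edge spread function in the range/depth dimension from a synthetic range image of a slightly slanted depth step, differentiates it to a line spread function, and Fourier-transforms that to obtain a modulation transfer curve.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Synthetic range image: a 1 m depth step whose edge is slanted a few
      # degrees from vertical, blurred to mimic sensor response (assumed values).
      N = 256
      x = np.arange(N)
      X, Y = np.meshgrid(x, x)
      edge_pos = N / 2 + 0.05 * (Y - N / 2)           # slanted edge location (px)
      depth = np.where(X > edge_pos, 1.0, 0.0)        # metres
      depth = gaussian_filter(depth, sigma=2.0)

      # Project pixels onto the edge-normal axis and bin them at sub-pixel
      # spacing to build an over-sampled edge spread function (ESF) in depth.
      dist = (X - edge_pos).ravel()
      bins = np.arange(-20.0, 20.0, 0.25)
      idx = np.digitize(dist, bins)
      esf = np.array([depth.ravel()[idx == i].mean()
                      for i in range(1, len(bins)) if np.any(idx == i)])

      # Differentiate ESF -> line spread function; FFT -> range-domain MTF.
      lsf = np.gradient(esf)
      rmtf = np.abs(np.fft.rfft(lsf))
      rmtf /= rmtf[0]
      print("range-MTF at the first few spatial frequencies:", np.round(rmtf[:5], 3))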

  9. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m³ and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  10. ROIC for gated 3D imaging LADAR receiver

    NASA Astrophysics Data System (ADS)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep space communications, and scanning video imaging are three applications requiring very low noise optical receivers to achieve detection of fast, weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth, and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 μm pitch for 3D-LADAR was designed for a gated optical receiver. The ROIC operates at 77 K and includes the unit cell circuit, column-level circuit, timing control, bias circuit, and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit, and timing control module. In particular, the preamplifier uses a capacitor-feedback transimpedance amplifier (CTIA) structure with two capacitors offering switchable capacitance for passive/active dual-mode imaging. The main element of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to the signal processing in a readout integrated circuit because of their working characteristics. The output driver is a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the amplifier in the unity-gain buffer is a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns; for integration currents from 200 nA to 4 μA, the circuit shows nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integration currents from 1 nA to 20 nA, the nonlinearity is also less than 1%.

  11. Ultra-Compact, High-Resolution LADAR System for 3D Imaging

    NASA Technical Reports Server (NTRS)

    Xu, Jing; Gutierrez, Roman

    2009-01-01

    An eye-safe LADAR system weighs under 500 grams and has a range resolution of 1 mm at 10 m. The laser uses an adjustable, tiny microelectromechanical system (MEMS) mirror made by SiWave to sweep the laser frequency. The laser device itself is small (70 x 50 x 13 mm). The LADAR uses mature fiber-optic telecommunication technologies throughout the system, making the innovation an efficient performer. The small size and light weight make the system useful for commercial and industrial applications including surface damage inspections, range measurements, and 3D imaging.

  12. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  13. A model and simulation to predict 3D imaging LADAR sensor systems performance in real-world type environments

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Russo, Leonard E.

    2006-08-01

    BAE SYSTEMS reports on a program to develop a high-fidelity model and simulation to predict the performance of angle-angle-range 3D flash LADAR Imaging Sensor systems. Accurate methods to model and simulate performance from 3D LADAR systems have been lacking, relying upon either single pixel LADAR performance or extrapolating from passive detection FPA performance. The model and simulation reported here is developed expressly for 3D angle-angle-range imaging LADAR systems. To represent an accurate "real world" type environment this model and simulation accounts for: 1) laser pulse shape; 2) detector array size; 3) detector noise figure; 4) detector gain; 5) target attributes; 6) atmospheric transmission; 7) atmospheric backscatter; 8) atmospheric turbulence; 9) obscurants; 10) obscurant path length; and 11) platform motion. The angle-angle-range 3D flash LADAR model and simulation accounts for all pixels in the detector array by modeling and accounting for the non-uniformity of each individual pixel. Here, noise sources and gain are modeled based upon their pixel-to-pixel statistical variation. A cumulative probability function is determined by integrating the normal distribution with respect to detector gain, and, for each pixel, a random number is compared with the cumulative probability function, resulting in a different gain for each pixel within the array. In this manner very accurate performance is determined pixel-by-pixel for the entire array. Model outputs are 3D images of the far-field distribution across the array as intercepted by the target, gain distribution, power distribution, average signal-to-noise, and probability of detection across the array.

  14. Design and performance of single photon APD focal plane arrays for 3-D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph

    2010-08-01

    We describe the design, fabrication, and performance of focal plane arrays (FPAs) for use in 3-D LADAR imaging applications requiring single photon sensitivity. These 32 × 32 FPAs provide high-efficiency single photon sensitivity for three-dimensional LADAR imaging applications at 1064 nm. Our GmAPD arrays are designed using a planar-passivated avalanche photodiode device platform with buried p-n junctions that has demonstrated excellent performance uniformity, operational stability, and long-term reliability. The core of the FPA is a chip stack formed by hybridizing the GmAPD photodiode array to a custom CMOS read-out integrated circuit (ROIC) and attaching a precision-aligned GaP microlens array (MLA) to the back-illuminated detector array. Each ROIC pixel includes an active quenching circuit governing Geiger-mode operation of the corresponding avalanche photodiode pixel as well as a pseudo-random counter to capture per-pixel time-of-flight timestamps in each frame. The FPA has been designed to operate at frame rates as high as 186 kHz for 2 μs range gates. Effective single photon detection efficiencies as high as 40% (including all optical transmission and MLA losses) are achieved for dark count rates below 20 kHz. For these planar-geometry diffused-junction GmAPDs, isolation trenches are used to reduce crosstalk due to hot carrier luminescence effects during avalanche events, and we present details of the crosstalk performance for different operating conditions. Direct measurement of temporal probability distribution functions due to cumulative timing uncertainties of the GmAPDs and ROIC circuitry has demonstrated a FWHM timing jitter as low as 265 ps (standard deviation ~100 ps).

  15. Fusion of current technologies with real-time 3D MEMS ladar for novel security and defense applications

    NASA Astrophysics Data System (ADS)

    Siepmann, James P.

    2006-05-01

    Through the utilization of scanning MEMS mirrors in ladar devices, a whole new range of potential military, Homeland Security, law enforcement, and civilian applications is now possible. Currently, ladar devices are typically large (>15,000 cc), heavy (>15 kg), and expensive (>$100,000), while current MEMS ladar designs are more than an order of magnitude less in each respect, opening up a myriad of potential new applications. One such application with current technology is a GPS-integrated MEMS ladar unit, which could be used for real-time border monitoring or the creation of virtual 3D battlefields after being dropped or propelled into hostile territory. Another current technology that can be integrated into a MEMS ladar unit is digital video, which can give high resolution and true color to a picture that is then enhanced with range information in a real-time display format that is easier for the user to understand and assimilate than typical gray-scale or false color images. The problem with using 2-axis MEMS mirrors in ladar devices is that in order to have a resonance frequency capable of practical real-time scanning, they must either be quite small and/or have a low maximum tilt angle. Typically, this figure of merit has been at or below 10 (mg·mm²·kHz²)·degrees. We have been able to solve this problem by using angle amplification techniques that utilize a series of MEMS mirrors and/or a specialized set of optics to achieve a broad field of view. These techniques and some of their novel applications are explained and discussed herein.

  16. Flattop beam illumination for 3D imaging ladar with simple optical devices in the wide distance range

    NASA Astrophysics Data System (ADS)

    Tsuji, Hidenobu; Nakano, Takayuki; Matsumoto, Yoshihiro; Kameyama, Shumpei

    2016-04-01

    We have developed an illumination optical system for 3D imaging ladar (laser detection and ranging) that forms a flattop beam shape by transformation of a Gaussian beam over a wide distance range. The illumination is achieved by beam division and recombination using a prism and a negative-power lens. The optimum condition for the transformation by the optical system is derived. It is confirmed that the flattop distribution can be formed over a wide range of propagation distances, from 1 to 1000 m. The experimental result with the prototype is in good agreement with the calculation.

  17. HMI aspects of the usage of ladar 3D data in pilot DVE support systems

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Völschow, Philipp; Singer, Bernhard; Strobel, Michael; Kramper, Patrick; Bühler, Daniel

    2015-06-01

    The paper discusses specifics of high-resolution 3D sensor systems employed in helicopter DVE support systems and the consequences for the resulting HMI. 3D sensors have a number of characteristics that make them a cornerstone of helicopter pilot support or pilotage systems intended for use in DVE. Retrieving depth information gives specific advantages over 2D imagers. On the other hand, certain technology- and physics-inherent characteristics require a more elaborate visualization procedure than 2D image visualization. The goal of all displayed information has to be to reduce the pilot's workload in DVE operations. Displaying the processed information on an HMD as 3D-conformal data therefore requires particularly thorough HMI considerations.

  18. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
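
    The silhouette-carving stage can be illustrated with a toy voxel example. The sketch below is not the authors' code: it assumes three axis-aligned orthographic views of a sphere and keeps only the voxels whose projections fall inside every binary silhouette, which is the viewpoint-consistency test described above.

      import numpy as np

      # Toy visual hull: carve a voxel cube against binary silhouettes taken
      # from three orthographic viewpoints (axis-aligned for simplicity).
      # The "object" is a sphere, so every silhouette is a filled circle.
      N = 64
      coords = (np.arange(N) - N / 2 + 0.5) / (N / 2)          # roughly [-1, 1]
      X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")

      def circle_silhouette(u, v, radius=0.6):
          """Binary silhouette of a sphere under orthographic projection."""
          return (u ** 2 + v ** 2) <= radius ** 2

      # Start with a full grid and carve away voxels falling outside any view.
      occupied = np.ones((N, N, N), dtype=bool)
      occupied &= circle_silhouette(X, Y)   # view along +Z
      occupied &= circle_silhouette(X, Z)   # view along +Y
      occupied &= circle_silhouette(Y, Z)   # view along +X

      true_sphere = (X ** 2 + Y ** 2 + Z ** 2) <= 0.6 ** 2
      print("hull voxels:", occupied.sum(), "sphere voxels:", true_sphere.sum(),
            "carving overshoot:", occupied.sum() - true_sphere.sum())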

  19. Improvements in the Visualization of Stereoscopic 3D Imagery

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.

    2015-09-01

    A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has embedded mechanisms to perceive depth for extended periods of time without producing eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain displaying scenarios. An important source of eye fatigue originates in the conflict between vergence eye movements and focusing mechanisms. Today's eye-tracking technology makes it possible to know the viewer's gaze direction; hence, 3D imagery can be dynamically corrected based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays based on emulating the vergence and accommodation mechanisms of binocular human vision. Unlike other methods for improving visual comfort that introduce depth distortions into the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.

  20. Anti-ship missile tracking with a chirped amplitude modulation ladar

    NASA Astrophysics Data System (ADS)

    Redman, Brian C.; Stann, Barry L.; Ruff, William C.; Giza, Mark M.; Aliberti, Keith; Lawler, William B.

    2004-09-01

    Shipboard infrared search and track (IRST) systems can detect sea-skimming anti-ship missiles at long ranges. Since IRST systems cannot measure range and velocity, they have difficulty distinguishing missiles from slowly moving false targets and clutter. ARL is developing a ladar based on its patented chirped amplitude modulation (AM) technique to provide unambiguous range and velocity measurements of targets handed over to it by the IRST. Using the ladar's range and velocity data, false alarms and clutter objects will be distinguished from valid targets. If the target is valid, its angular location, range, and velocity will be used to update the target track until remediation has been effected. By using an array receiver, ARL's ladar can also provide 3D imagery of potential threats in support of force protection. The ladar development program will be accomplished in two phases. In Phase I, currently in progress, ARL is designing and building a breadboard ladar test system for proof-of-principle static platform field tests. In Phase II, ARL will build a brassboard ladar test system that will meet operational goals in shipboard testing against realistic targets. The principles of operation for the chirped AM ladar for range and velocity measurements, the ladar performance model, and the top-level design for the Phase I breadboard are presented in this paper.
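
    As background to the range-and-velocity principle, the sketch below shows the textbook linear-FM (up/down-chirp) relations in which the common part of the two beat frequencies maps to range and the differential part to Doppler. The chirp parameters and modulation carrier are assumed values for illustration; ARL's actual waveform and processing may differ.

      C = 3.0e8                       # speed of light, m/s

      # Assumed chirp parameters, for illustration only.
      BANDWIDTH = 500e6               # Hz swept by the AM chirp
      CHIRP_TIME = 1e-3               # s per sweep
      SLOPE = BANDWIDTH / CHIRP_TIME  # Hz/s

      def range_and_velocity(f_beat_up, f_beat_down, f_mod=1.5e9):
          """Recover range and line-of-sight speed from up/down-chirp beats.

          The range delay shifts both beat frequencies the same way, while
          Doppler shifts them in opposite directions, so the mean and the
          half-difference separate the two. f_mod is the assumed modulation
          carrier on which the Doppler shift is measured.
          """
          f_range = 0.5 * (f_beat_up + f_beat_down)
          f_doppler = 0.5 * (f_beat_down - f_beat_up)
          rng_m = C * f_range / (2.0 * SLOPE)
          vel_ms = C * f_doppler / (2.0 * f_mod)
          return rng_m, vel_ms

      # Beat pair consistent with a target near 3 km moving ~300 m/s along the
      # line of sight (made-up numbers).
      print(range_and_velocity(f_beat_up=9.997e6, f_beat_down=10.003e6))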

  21. Advances in HgCdTe APDs and LADAR Receivers

    NASA Technical Reports Server (NTRS)

    Bailey, Steven; McKeag, William; Wang, Jinxue; Jack, Michael; Amzajerdian, Farzin

    2010-01-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise readout integrated circuits. Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, NEP below 1 nW, and GHz bandwidths; these devices have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In this presentation we will review progress in high-resolution scanning, staring, and ultra-high-sensitivity photon-counting LADAR sensors.

  22. Depth-fused 3D imagery on an immaterial display.

    PubMed

    Lee, Cha; Diverdi, Stephen; Höllerer, Tobias

    2009-01-01

    We present an immaterial display that uses a generalized form of depth-fused 3D (DFD) rendering to create unencumbered 3D visuals. To accomplish this result, we demonstrate a DFD display simulator that extends the established depth-fused 3D principle by using screens in arbitrary configurations and from arbitrary viewpoints. The feasibility of the generalized DFD effect is established with a user study using the simulator. Based on these results, we developed a prototype display using one or two immaterial screens to create an unencumbered 3D visual that users can penetrate, examining the potential for direct walk-through and reach-through manipulation of the 3D scene. We evaluate the prototype system in formative and summative user studies and report the tolerance thresholds discovered for both tracking and projector errors.

  23. Imaging through obscurants with a heterodyne detection-based ladar system

    NASA Astrophysics Data System (ADS)

    Reibel, Randy R.; Roos, Peter A.; Kaylor, Brant M.; Berg, Trenton J.; Curry, James R.

    2014-06-01

    Bridger Photonics has been researching and developing a ladar system based on heterodyne detection for imaging through brownout and other DVEs. An FMCW ladar system provides several advantages over direct-detection pulsed time-of-flight systems, including: 1) higher average powers; 2) single-photon sensitivity while remaining tolerant of strong return signals; 3) Doppler sensitivity for clutter removal; and 4) a more flexible system for sensing during various stages of flight. In this paper, we provide a review of our sensor, discuss lessons learned during various DVE tests, and show our latest 3D imagery.

  24. Characterization of scannerless ladar

    NASA Astrophysics Data System (ADS)

    Monson, Todd C.; Grantham, Jeffrey W.; Childress, Steve W.; Sackos, John T.; Nellums, Robert O.; Lebien, Steve M.

    1999-05-01

    Scannerless laser radar (LADAR) is the next revolutionary step in laser radar technology. It has the potential to dramatically increase the image frame rate over raster-scanned systems while eliminating mechanical moving parts. The system presented here uses a negative lens to diverge the light from a pulsed laser to floodlight-illuminate a target. Return light is collected by a commercial camera lens, an image intensifier tube applies a modulated gain, and a relay lens focuses the resulting image onto a commercial CCD camera. To produce range data, a minimum of three snapshots is required while modulating the gain of the image intensifier tube's microchannel plate (MCP) at a MHz rate. Since November 1997 the scannerless LADAR designed by Sandia National Laboratories has undergone extensive testing. It has been taken on numerous field tests and has imaged calibrated panels up to a distance of 1 km on an outdoor range. Images have been taken at ranges over a kilometer and can be taken at much longer ranges with modified range gate settings. Sample imagery and potential applications are presented here. The accuracy of range imagery produced by this scannerless LADAR has been evaluated and the range resolution was found to be approximately 15 cm. Its sensitivity was also quantified and found to be better by a large factor than that of raster-scanned direct-detection LADAR systems. Additionally, the effect of the number of snapshots and the phase spacing between them on the quality of the range data has been evaluated. Overall, the impressive results produced by scannerless LADAR are well suited to autonomous munitions guidance and various other applications.
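
    Range recovery from a small set of gain-modulated snapshots can be sketched with the standard three-step phase-shifting relations. The modulation frequency and intensities below are invented for illustration and are not Sandia's actual parameters.

      import numpy as np

      C = 3.0e8
      F_MOD = 10e6            # assumed MCP/laser modulation frequency (Hz)

      def range_from_three_snapshots(i0, i1, i2):
          """Three-step phase-shift range recovery (phases 0, 120, 240 degrees).

          Each snapshot is the scene imaged with the intensifier gain modulated
          at F_MOD but offset in phase; the recovered phase maps to range modulo
          the ambiguity interval c / (2 * F_MOD).
          """
          phase = np.arctan2(np.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)
          phase = np.mod(phase, 2.0 * np.pi)
          return C * phase / (4.0 * np.pi * F_MOD)

      # Simulated target 7.5 m inside the 15 m ambiguity interval (illustration).
      true_range = 7.5
      phi = 4.0 * np.pi * F_MOD * true_range / C
      offset, mod_depth = 100.0, 40.0
      snaps = [offset + mod_depth * np.cos(phi + k * 2.0 * np.pi / 3.0)
               for k in range(3)]
      print(range_from_three_snapshots(*snaps))   # ~7.5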

  25. High Accuracy 3D Processing of Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Gruen, A.; Zhang, L.; Kocaman, S.

    2007-01-01

    Automatic DSM/DTM generation reproduces not only general features but also detailed features of the terrain relief. Height accuracy of around 1 pixel is achieved in cooperative terrain, with RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed stereo measurement of points on the roofs in order to generate a 3D city model with CCM. The results show that building models with main roof structures can be successfully extracted from HRSI. As expected, more details are visible with QuickBird.

  26. Impact of Building Heights on 3D Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, estimated building heights might include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent to planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). In local areas, experimental results show that land use blocks with low FAR values often have small errors, due to small building height errors for the low buildings in those blocks, while blocks with high FAR values often have large errors, due to large building height errors for the high buildings in those blocks. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of buildings in a scene; therefore, building heights must be considered when spaceborne stereo imagery is used to estimate 3D density, in order to improve precision.
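
    A minimal numerical sketch, with invented block data and an assumed 3 m storey height, of how underestimated building heights pull the Floor Area Ratio (FAR) of a block downward, most strongly where tall buildings dominate:

      # Assumed storey height and toy block data; illustration only.
      STOREY_HEIGHT_M = 3.0

      def block_far(footprints_m2, heights_m, block_area_m2):
          """FAR = sum(footprint * estimated floor count) / block area."""
          floor_area = sum(fp * max(round(h / STOREY_HEIGHT_M), 1)
                           for fp, h in zip(footprints_m2, heights_m))
          return floor_area / block_area_m2

      footprints = [400.0, 600.0, 250.0]          # m^2, from planar reference data
      true_heights = [9.0, 30.0, 12.0]            # m
      stereo_heights = [8.0, 24.0, 11.0]          # m, underestimated from stereo

      print("FAR (true heights):   %.2f" % block_far(footprints, true_heights, 10000.0))
      print("FAR (stereo heights): %.2f" % block_far(footprints, stereo_heights, 10000.0))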

  27. The Maintenance Of 3-D Scene Databases Using The Analytical Imagery Matching System (AIMS)

    NASA Astrophysics Data System (ADS)

    Hovey, Stanford T.

    1987-06-01

    The increased demand for multi-resolution displays of simulated scene data for aircraft training or mission planning has led to a need for digital databases of 3-dimensional topography and geographically positioned objects. This data needs to be at varying resolutions or levels of detail as well as be positionally accurate to satisfy close-up and long distance scene views. The generation and maintenance processes for this type of digital database requires that relative and absolute spatial positions of geographic and cultural features be carefully controlled in order for the scenes to be representative and useful for simulation applications. Autometric, Incorporated has designed a modular Analytical Image Matching System (AIMS) which allows digital 3-D terrain feature data to be derived from cartographic and imagery sources by a combination of automatic and man-machine techniques. This system provides a means for superimposing the scenes of feature information in 3-D over imagery for updating. It also allows for real-time operator interaction between a monoscopic digital imagery display, a digital map display, a stereoscopic digital imagery display and automatically detected feature changes for transferring 3-D data from one coordinate system's frame of reference to another for updating the scene simulation database. It is an advanced, state-of-the-art means for implementing a modular, 3-D scene database maintenance capability, where original digital or converted-to-digital analog source imagery is used as a basic input to perform accurate updating.

  28. On Fundamental Evaluation Using UAV Imagery and 3D Modeling Software

    NASA Astrophysics Data System (ADS)

    Nakano, K.; Suzuki, H.; Tamino, T.; Chikatsu, H.

    2016-06-01

    Unmanned aerial vehicles (UAVs), which have been widely used in recent years, can acquire high-resolution images with resolutions in millimeters; such images cannot be acquired with manned aircraft. Moreover, it has become possible to obtain a surface reconstruction of a realistic 3D model using high-overlap images and 3D modeling software such as ContextCapture, Pix4Dmapper, and PhotoScan, based on computer vision techniques such as structure from motion and multi-view stereo. 3D modeling software has many applications; however, most packages do not seem to provide appropriate accuracy control in accordance with the knowledge of photogrammetry and/or computer vision. Therefore, we performed flight tests in a test field using a UAV equipped with a gimbal stabilizer and a consumer-grade digital camera. Our UAV is a hexacopter that can fly autonomously along waypoints and record flight logs. We acquired images from different altitudes such as 10 m, 20 m, and 30 m. We obtained 3D reconstruction results of orthoimages, point clouds, and textured TIN models for accuracy evaluation in several cases with different image scale conditions using 3D modeling software. Moreover, the accuracy was evaluated for different units of input imagery: course unit and flight unit. This paper describes a fundamental accuracy evaluation for 3D modeling using UAV imagery and 3D modeling software from the viewpoint of close-range photogrammetry.

  29. Advances in ladar components and subsystems at Raytheon

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Chapman, George; Edwards, John; Mc Keag, William; Veeder, Tricia; Wehner, Justin; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Hall, Donald N. B.; Jacobson, Shane M.; Amzajerdian, Farzin; Cook, T. Dean

    2012-06-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise readout integrated circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, NEP below 1 nW, and GHz bandwidths; these devices have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in three areas: (1) a scanning 256 × 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program; (2) a staring 256 × 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission; and (3) photon-counting SCAs, which have demonstrated a dramatic reduction in dark count rate due to improved design, operation, and processing.

  30. Advances in LADAR Components and Subsystems at Raytheon

    NASA Technical Reports Server (NTRS)

    Jack, Michael; Chapman, George; Edwards, John; McKeag, William; Veeder, Tricia; Wehner, Justin; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Hall, Donald N. B.; Jacobson, Shane M.; Amzajerdian, Farzin; Cook, T. Dean

    2012-01-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain, i.e., APDs, with very low noise readout integrated circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, NEP below 1 nW, and GHz bandwidths; these devices have demonstrated linear-mode photon counting. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in three areas: (1) a scanning 256 x 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program; (2) a staring 256 x 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission; and (3) photon-counting SCAs, which have demonstrated a dramatic reduction in dark count rate due to improved design, operation, and processing.

  31. True 3D High Resolution Imagery of a Buried Shipwreck: the Invincible (1758)

    NASA Astrophysics Data System (ADS)

    Dix, J. K.; Bull, J. M.; Henstock, T.; Gutowski, M.; Hogarth, P.; Leighton, T. G.; White, P. R.

    2005-12-01

    This paper will present the first true 3D high-resolution acoustic imagery of a wreck site buried in the marine environment. Using a 3D Chirp system developed at the University of Southampton, a marine seismic survey of the mid-eighteenth-century wreck site has been undertaken. The Invincible was a 74-gun warship built by the French in 1744, captured by the British in 1747, and subsequently lost off Portsmouth, UK in February 1758. The wreck was re-discovered by divers in 1979, partially buried on the margins of a mobile sandbank in approximately 8 metres of water. In 2004 the site was surveyed using a 60-channel, rigid-framed 3D Chirp (1.5-13 kHz source sweep) system with integral RTK GPS and attitude systems. An area of 160 m x 160 m, centered over the wreck site, was surveyed, with a total of 150 GB of data being acquired. The data were processed, using 3D Promax, to produce 25 cm bins with typical 3-6-fold coverage. The stacked traces have been visualized and interpreted using Kingdom Suite software. The final imagery shows at unprecedented resolution the full three-dimensional buried form of the wreck and its relationship to the surrounding sedimentary sequences, enabling the full evolution of the site to be discussed. Further, the data are compared to previously acquired swath bathymetry and 2D seismic data in order to illustrate the impact of such a device on underwater cultural heritage management.

  32. Influence of GSD for 3D City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling, for decision making to set a mark in urban development. MOMRA is responsible for large-scale mapping at the 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales with 10 cm, 20 cm and 40 cm GSD, together with aerial triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses aerial triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from the uneven surface and undulations of the terrain. Real-time 3D visualization and interactive exploration thus support planning processes by providing multiple stakeholders, such as decision makers, architects, urban planners, authorities, citizens or investors, with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities for dealing with exotic conditions through better and more advanced viewing infrastructure. Riyadh on one side is 5700 m above sea level and on the other hand Abha city is 2300 m; this uneven terrain represents a drastic change of surface in the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities. In this research paper, the influence of aerial imagery of different GSD (ground sample distance) with aerial triangulation is examined for 3D visualization in different regions of the Kingdom, to check which scale gives better results while remaining cost manageable, with GSDs of 7.5 cm, 10 cm, 20 cm and 40 cm.

  33. A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion

    NASA Astrophysics Data System (ADS)

    Bolick, Leslie; Harguess, Josh

    2016-05-01

    An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. Several existing tools, such as VisualSFM and the open-source project OpenSfM, assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAVs) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.
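
    The paper does not spell out its degradation pipeline, but a controlled degradation stage might look like the sketch below, which applies parameterized blur, additive noise, and requantization to each frame before it is passed to an SfM package such as VisualSFM or OpenSfM.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def degrade(image, blur_sigma=0.0, noise_sigma=0.0, bit_depth=8, rng=None):
          """Apply controlled blur, additive Gaussian noise, and requantization.

          `image` is a float array in [0, 1]; the returned image is in the same
          range. Severity is controlled by the three parameters.
          """
          rng = rng or np.random.default_rng(0)
          out = gaussian_filter(image, sigma=blur_sigma) if blur_sigma > 0 else image
          if noise_sigma > 0:
              out = out + rng.normal(scale=noise_sigma, size=out.shape)
          levels = 2 ** bit_depth - 1
          return np.clip(np.round(out * levels) / levels, 0.0, 1.0)

      # Example severity ladder applied to a synthetic frame; in practice each
      # level would be run through the SfM tool and the resulting point clouds
      # compared against the pristine baseline.
      frame = np.random.default_rng(1).random((480, 640))
      for sigma in (0.0, 1.0, 2.0, 4.0):
          print("blur sigma", sigma, "->", degrade(frame, blur_sigma=sigma).std())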

  34. Extracting Semantically Annotated 3D Building Models with Textures from Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Meissner, H.; Dahlke, D.; Poznanska, A.

    2015-03-01

    This paper proposes a method for the reconstruction of city buildings with automatically derived textures that can be directly used for façade element classification. Oblique and nadir aerial imagery recorded by a multi-head camera system is transformed into dense 3D point clouds and evaluated statistically in order to extract the hull of the structures. For the resulting wall, roof and ground surfaces, high-resolution polygonal texture patches are calculated and compactly arranged in a texture atlas without resampling. The façade textures are subsequently analyzed by a commercial software package to detect possible windows, whose contours are projected into the original oriented source images and sparsely ray-cast to obtain their 3D world coordinates. With the windows reintegrated into the previously extracted hull, the final building models are stored as semantically annotated CityGML "LOD-2.5" objects.

  35. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial use such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of the method is examined by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result that contains a larger offset than for the test data because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a complete 3D scene.
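
    The translation, rotation, and scale between two point-cloud coordinate systems can be estimated in closed form once 3D keypoint correspondences are available. The sketch below uses the standard Umeyama similarity alignment on synthetic correspondences; it is a generic illustration, not the thesis workflow, which adds feature matching and a final iterative refinement.

      import numpy as np

      def similarity_transform(src, dst):
          """Closed-form (Umeyama) estimate of scale s, rotation R, translation t
          such that dst ~= s * R @ src + t, given matched 3D keypoints (N x 3)."""
          mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
          src_c, dst_c = src - mu_s, dst - mu_d
          cov = dst_c.T @ src_c / len(src)
          U, S, Vt = np.linalg.svd(cov)
          D = np.eye(3)
          if np.linalg.det(U) * np.linalg.det(Vt) < 0:
              D[2, 2] = -1.0            # guard against a reflection solution
          R = U @ D @ Vt
          s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
          t = mu_d - s * R @ mu_s
          return s, R, t

      # Synthetic check: recover a known transform from noisy correspondences.
      rng = np.random.default_rng(42)
      src = rng.random((200, 3)) * 50.0
      angle = np.deg2rad(30.0)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0, 0.0, 1.0]])
      dst = 1.7 * (src @ R_true.T) + np.array([10.0, -5.0, 2.0])
      dst += rng.normal(scale=0.05, size=dst.shape)
      s, R, t = similarity_transform(src, dst)
      print("recovered scale (expect ~1.7):", round(s, 3))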

  36. Quality Analysis on 3D Building Models Reconstructed from UAV Imagery

    NASA Astrophysics Data System (ADS)

    Jarzabek-Rychard, M.; Karpina, M.

    2016-06-01

    Recent developments in UAV technology and structure-from-motion techniques mean that UAVs are becoming standard platforms for 3D data collection. Because of their flexibility and ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce the labour cost of fast updates to already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is conducted in three ways: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; and (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of better than 18 cm in planimetric position and about 15 cm in the height component.

  37. Experiments with UAS Imagery for Automatic Modeling of Power Line 3D Geometry

    NASA Astrophysics Data System (ADS)

    Jóźków, G.; Vander Jagt, B.; Toth, C.

    2015-08-01

    The ideal mapping technology for transmission line inspection is airborne LiDAR operated from helicopter platforms. It allows full 3D geometry extraction in a highly automated manner. Large-scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag must be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still costly. UAS technology has the potential to reduce these costs, especially when using inexpensive platforms with consumer-grade cameras. This study investigates the potential of using high-resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of the experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points were also created on the wires. This allowed the 3D geometry of the transmission lines to be modeled similarly to LiDAR-acquired point clouds. Results showed that transmission line modeling is possible with high internal accuracy in both the horizontal and vertical directions, even when the wires were represented by a partial (sparse) point cloud.
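
    Once dense matching produces points along a wire, the sag can be estimated by fitting a curve in the vertical plane of the span. The sketch below fits a parabola (a common approximation to the catenary at typical sag-to-span ratios) to synthetic, noisy wire points with made-up span geometry; it is an illustration, not the processing used in the paper.

      import numpy as np

      # Synthetic wire points in the vertical plane of one span: x along the
      # span (m), z height (m), with level supports and matching noise.
      rng = np.random.default_rng(3)
      span, sag, attach_z = 300.0, 8.0, 25.0
      x = np.sort(rng.uniform(0.0, span, 150))
      z_true = attach_z - 4.0 * sag * (x / span) * (1.0 - x / span)
      z = z_true + rng.normal(scale=0.05, size=x.shape)

      # Least-squares parabola fit z = a*x^2 + b*x + c.
      A = np.column_stack([x ** 2, x, np.ones_like(x)])
      (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

      # Sag = attachment height minus the fitted height at mid-span.
      z_mid = a * (span / 2) ** 2 + b * (span / 2) + c
      print("estimated sag: %.2f m (true %.1f m)" % (c - z_mid, sag))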

  18. Three-dimensional landing zone ladar

    NASA Astrophysics Data System (ADS)

    Savage, James; Goodrich, Shawn; Burns, H. N.

    2016-05-01

    Three-Dimensional Landing Zone (3D-LZ) refers to a series of Air Force Research Laboratory (AFRL) programs to develop high-resolution imaging ladar to address helicopter approach and landing in degraded visual environments, with emphasis on brownout, cable warning and obstacle avoidance, and controlled flight into terrain. Initial efforts adapted ladar systems built for munition seekers, and their success led to the 3D-LZ Joint Capability Technology Demonstration (JCTD), a 27-month program to develop and demonstrate a ladar subsystem that could be housed with the AN/AAQ-29 FLIR turret flown on US Air Force Combat Search and Rescue (CSAR) HH-60G Pave Hawk helicopters. Following the JCTD flight demonstration, further development focused on reducing size, weight, and power while continuing to refine the real-time geo-referencing, dust rejection, obstacle and cable avoidance, and Helicopter Terrain Awareness and Warning (HTAWS) capability demonstrated under the JCTD. This paper summarizes significant ladar technology development milestones to date, the individual ladar technologies within 3D-LZ, and the results of flight testing.

  19. Spectral ladar as a UGV navigation sensor

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2011-06-01

    We demonstrate new results using our Spectral LADAR prototype, which highlight the benefits of this sensor for Unmanned Ground Vehicle (UGV) navigation applications. This sensor is an augmentation of conventional LADAR and uses a polychromatic source to obtain range-resolved 3D spectral point clouds. These point cloud images can be used to identify objects based on combined spatial and spectral features in three dimensions and at long standoff range. The Spectral LADAR transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Backscatter from distant targets is dispersed into 25 spectral bands, where each spectral band is independently range resolved with multiple return pulse recognition. Our new results show that Spectral LADAR can spectrally differentiate hazardous terrain (mud) from favorable driving surfaces (dry ground). This is a critical capability, since in UGV contexts mud is potentially hazardous, requires modified vehicle dynamics, and is difficult to identify based on 3D spatial signatures. Additionally, we demonstrate the benefits of range resolved spectral imaging, where highly cluttered 3D images of scenes (e.g. containing camouflage, foliage) are spectrally unmixed by range separation and segmented accordingly. Spectral LADAR can achieve this unambiguously and without the need for stereo correspondence, sub-pixel detection algorithms, or multi-sensor registration and data fusion.

  20. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    NASA Astrophysics Data System (ADS)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor imagery has previously been reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud. LiDAR DSM and intensity information are utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D probability density function (pdf). In addition, a local similarity measure is introduced to reduce the complexity of optimisation at higher dimensions and the computational cost. The reliability of registration is thereby improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results are discussed.
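    As a rough illustration of the similarity measure underlying this registration approach, the sketch below estimates mutual information between an optical image and a co-registered LiDAR-derived raster from their joint histogram. It is a two-variable case only; the paper's Combined MI works with a three-dimensional pdf over optical intensity, LiDAR DSM and LiDAR intensity, and the inputs here are assumed.

    ```python
    import numpy as np

    def mutual_information(optical, lidar_layer, bins=64):
        """MI between two co-registered rasters of equal shape, from their joint histogram."""
        joint, _, _ = np.histogram2d(optical.ravel(), lidar_layer.ravel(), bins=bins)
        pxy = joint / joint.sum()                  # joint probability estimate
        px = pxy.sum(axis=1, keepdims=True)        # marginal of the optical intensities
        py = pxy.sum(axis=0, keepdims=True)        # marginal of the LiDAR layer
        nz = pxy > 0                               # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
    ```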

  1. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    PubMed Central

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723
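    One geometric cue mentioned above, the coplanarity of door candidates with their host wall, can be checked with a simple point-to-plane distance test. The sketch below only illustrates that idea with assumed inputs and tolerance; it is not the authors' detection pipeline.

    ```python
    import numpy as np

    def coplanarity_score(candidate_points, wall_point, wall_normal, tol=0.02):
        """Fraction of candidate points lying within `tol` metres of the wall plane."""
        n = wall_normal / np.linalg.norm(wall_normal)
        dist = np.abs((candidate_points - wall_point) @ n)   # orthogonal distance to the plane
        return float((dist < tol).mean())
    ```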

  2. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-02-03

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction.

  3. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  4. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated whether a realistic 3D visualization of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.
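    The event-related desynchronization reported above is conventionally expressed as the percentage change in band power during motor imagery relative to a resting reference interval. The sketch below shows that calculation for the 10-12 Hz upper alpha band; the sampling rate, epochs and Welch settings are assumptions, not the study's analysis pipeline.

    ```python
    import numpy as np
    from scipy.signal import welch

    def upper_alpha_power(x, fs, f_lo=10.0, f_hi=12.0):
        """Mean power of a single-channel EEG epoch in the 10-12 Hz band (Welch PSD)."""
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * int(fs)))
        band = (f >= f_lo) & (f <= f_hi)
        return pxx[band].mean()

    def erd_percent(reference_epoch, task_epoch, fs):
        """ERD% = (A - R) / R * 100; negative values indicate desynchronization during MI."""
        r = upper_alpha_power(reference_epoch, fs)
        a = upper_alpha_power(task_epoch, fs)
        return (a - r) / r * 100.0
    ```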

  5. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total in 12 out of 20 tasks the end-user of the 3D visualization group showed an enhanced upper alpha ERD relative to 2D VM group, with statistical significance in nine tasks.With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642

  6. Comparison of 32 x 128 and 32 x 32 Geiger-mode APD FPAs for single photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph

    2011-05-01

    We present results obtained from 3D imaging focal plane arrays (FPAs) employing planar-geometry InGaAsP/InP Geiger-mode avalanche photodiodes (GmAPDs) with high-efficiency single photon sensitivity at 1.06 μm. We report results obtained for new 32 x 128 format FPAs with 50 μm pitch and compare these results to those obtained for 32 x 32 format FPAs with 100 μm pitch. We show excellent pixel-level yield, including 100% pixel operability, for both formats. The dark count rate (DCR) and photon detection efficiency (PDE) performance is found to be similar for both types of arrays, including the fundamental DCR vs. PDE tradeoff. The optical crosstalk due to photon emission induced by pixel-level avalanche detection events is found to be qualitatively similar for both formats, with some crosstalk metrics for the 32 x 128 format found to be moderately elevated relative to the 32 x 32 FPA results. Timing jitter measurements are also reported for the 32 x 128 FPAs.

  7. New developments in HgCdTe APDs and LADAR receivers

    NASA Astrophysics Data System (ADS)

    McKeag, William; Veeder, Tricia; Wang, Jinxue; Jack, Michael; Roberts, Tom; Robinson, Tom; Neisz, James; Andressen, Cliff; Rinker, Robert; Cook, T. Dean; Amzajerdian, Farzin

    2011-06-01

    Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high performance detectors with gain, i.e., APDs, with very low noise Readout Integrated Circuits (ROICs). Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths, and these devices have demonstrated linear-mode photon counting. SCAs utilizing these high performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In the following we review progress in real-time 3D LADAR imaging receiver products in two areas: (1) a scanning 256 × 4 configuration for the Multi-Mode Sensor Seeker (MMSS) program and (2) a staring 256 × 256 configuration for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) lunar landing mission.

  8. A novel window based method for approximating the Hausdorff in 3D range imagery.

    SciTech Connect

    Koch, Mark William

    2004-10-01

    Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
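    The Hausdorff fraction referred to above is the fraction of model points that lie within a distance tolerance of the scene points. A straightforward (if not linear-time) way to compute it is a nearest-neighbour query on a k-d tree, sketched below; this is an illustrative baseline, not the windowed approximation proposed in the paper.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def hausdorff_fraction(model_pts, scene_pts, tol):
        """Fraction of 3D model points whose nearest scene point is within `tol`."""
        d, _ = cKDTree(scene_pts).query(model_pts, k=1)
        return float((d <= tol).mean())

    def directed_hausdorff(model_pts, scene_pts):
        """Directed Hausdorff distance from model to scene (worst-case nearest-neighbour distance)."""
        d, _ = cKDTree(scene_pts).query(model_pts, k=1)
        return float(d.max())
    ```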

  9. Meteoroid and debris special investigation group; status of 3-D crater analysis from binocular imagery

    NASA Technical Reports Server (NTRS)

    Sapp, Clyde A.; See, Thomas H.; Zolensky, Michael E.

    1992-01-01

    During the 3 month deintegration of the LDEF, the M&D SIG generated approximately 5000 digital color stereo image pairs of impact related features from all space exposed surfaces. Currently, these images are being processed at JSC to yield more accurate feature information. Work is underway to determine the minimum number of data points necessary to parametrically define impact crater morphologies, in order to minimize the man-hour intensive task of tie point selection. Initial attempts at deriving accurate crater depth and diameter measurements from binocular imagery were based on the assumption that the crater geometries were best defined by a paraboloid. We made no assumptions regarding the crater depth/diameter ratios, but instead allowed each crater to define its own coefficients by performing a least-squares fit based on user-selected tie points. Initial test cases resulted in larger errors than desired, so it was decided to test our basic assumption that the crater geometries could be parametrically defined as paraboloids. The method for testing this assumption was to carefully slice test craters (experimentally produced in an appropriate aluminum alloy) vertically through the center, yielding a readily visible cross-section of the crater geometry. Initially, five separate craters were cross-sectioned in this fashion. A digital image of each cross-section was then created, and the 2-D crater geometry was hand-digitized to create a table of XY positions for each crater. A 2nd order (parabolic) polynomial was fitted to the data using a least-squares approach. The differences between the fitted equation and the actual data were fairly significant, and easily large enough to account for the errors found in the 3-D fits. The differences between the curve fit and the actual data were consistent between the craters. This consistency suggested that the differences arose because a parabola did not sufficiently define the generic crater geometry.
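    The least-squares test of the paraboloid assumption amounts to fitting a 2nd-order polynomial to each digitized (x, z) cross-section and inspecting the residuals. A minimal sketch with hypothetical profile values:

    ```python
    import numpy as np

    # Hypothetical hand-digitized crater cross-section: x position vs. depth z (same units)
    x = np.array([-1.0, -0.8, -0.5, -0.2, 0.0, 0.2, 0.5, 0.8, 1.0])
    z = np.array([0.00, -0.30, -0.65, -0.90, -1.00, -0.92, -0.68, -0.28, 0.02])

    coeffs = np.polyfit(x, z, deg=2)          # least-squares parabola z = a*x^2 + b*x + c
    residuals = z - np.polyval(coeffs, x)     # departures from the parabolic model
    rms_error = np.sqrt(np.mean(residuals ** 2))
    print(coeffs, rms_error)
    ```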

  10. LADAR for structural damage detection

    NASA Astrophysics Data System (ADS)

    Moosa, Adil G.; Fu, Gongkang

    1999-12-01

    LADAR here stands for laser radar, using laser reflectivity for measurement. This paper presents a new technique using LADAR for structure evaluation. It is experimentally investigated in the laboratory. With cooperation of the US Federal Highway Administration, a recently developed LADAR system was used to measure structural deformation. The data were then treated for reducing noise and used to derive multiple features for diagnosis. The results indicate a promising direction of nondestructive evaluation using LADAR.

  11. 3D Building Modeling and Reconstruction using Photometric Satellite and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Izadi, Mohammad

    In this thesis, the problem of three dimensional (3D) reconstruction of building models using photometric satellite and aerial images is investigated. Two systems are presented: 1) 3D building reconstruction using a nadir single-view image, and 2) 3D building reconstruction using slant multiple-view aerial images. The first system detects building rooftops in orthogonal aerial/satellite images using a hierarchical segmentation algorithm and a shadow verification approach. The heights of detected buildings are then estimated using a fuzzy rule-based method, which measures the height of a building by comparing its predicted shadow region with the actual shadow evidence in the image. This system finally generates a KML (Keyhole Markup Language) file as output, containing 3D models of the detected buildings. The second system uses the geolocation information of a scene containing a building of interest and loads all slant-view images that contain this scene from an input image dataset. These images are then searched automatically to choose image pairs with different views of the scene (north, east, south and west) based on the geolocation and auxiliary data accompanying the input data (metadata that describes the acquisition parameters at capture time). The camera parameters corresponding to these images are refined using a novel point matching algorithm. Next, the system independently reconstructs the 3D flat surfaces that are visible in each view using an iterative algorithm. The 3D surfaces generated for all views are combined, and redundant surfaces are removed to create a complete set of 3D surfaces. Finally, the combined 3D surfaces are connected together to generate a more complete 3D model. In the experimental results, both systems are evaluated quantitatively and qualitatively, and different aspects of the two systems, including accuracy, stability, and execution time, are discussed.
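    The shadow-based height estimate in the first system ultimately rests on the geometric relation between shadow length and solar elevation. The sketch below shows only that basic relation with hypothetical numbers; the thesis uses a fuzzy rule-based comparison of predicted and observed shadow regions rather than a single length measurement.

    ```python
    import math

    def height_from_shadow(shadow_length_m, sun_elevation_deg):
        """Height of a building casting a shadow of the given length on flat ground."""
        return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

    print(height_from_shadow(18.0, 40.0))   # ~15.1 m for these assumed values
    ```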

  12. 3D exploitation of large urban photo archives

    NASA Astrophysics Data System (ADS)

    Cho, Peter; Snavely, Noah; Anderson, Ross

    2010-04-01

    Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.
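    The transfer of 3D and GIS knowledge into georegistered 2D image planes described above is, at its core, a pinhole camera projection. A minimal sketch with assumed camera intrinsics and pose (not the authors' georegistration pipeline):

    ```python
    import numpy as np

    def project_to_image(world_pts, K, R, t):
        """Project (N, 3) world points into pixel coordinates given intrinsics K, rotation R, translation t."""
        cam = R @ world_pts.T + t.reshape(3, 1)   # world frame -> camera frame
        uvw = K @ cam                             # camera frame -> homogeneous pixel coordinates
        return (uvw[:2] / uvw[2]).T               # perspective divide -> (u, v)
    ```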

  13. Dubai 3D Textured Mesh Using High Quality Resolution Vertical/Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Tayeb Madani, Adib; Ziad Ahmad, Abdullateef; Christoph, Lueken; Hammadi, Zamzam; Sabeal, Manal Abdullah

    2016-06-01

    Providing high quality 3D data at reasonable cost has always been essential, affording the core data and foundation for developing information-based decision-making tools for urban environments, with the capability of providing decision makers, stakeholders, professionals, and public users with 3D views and 3D analysis tools of spatial information that enable real-world views. Such tools help improve users' orientation and increase their efficiency in performing tasks related to city planning, inspection, infrastructure, roads, and cadastre management. In this paper, the capability of multi-view Vexcel UltraCam Osprey camera images is examined to provide a 3D model of building façades using an efficient image-based modeling workflow adopted by commercial software. The main steps of this work include specification, point cloud generation, and 3D modeling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique such as Semi Global Matching (SGM) is applied to the images to generate a point cloud. Then, a mesh model is calculated from the points and refined to obtain an accurate model of the buildings. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provided enough LoD2 detail of the buildings based on visual assessment. The objective of this paper is neither to compare nor to promote a specific technique over others, and it is not meant to promote one sensor-based system over other systems or mechanisms presented in existing or previous papers. The idea is to share experience.

  14. 3D building reconstruction based on given ground plan information and surface models extracted from spaceborne imagery

    NASA Astrophysics Data System (ADS)

    Tack, Frederik; Buyuksalih, Gurcan; Goossens, Rudi

    2012-01-01

    3D surface models have gained ground as an important tool for urban planning and mapping. However, urban environments are complex to model, and they provide a challenge for investigating the current limits of automatic digital surface modeling from high resolution satellite imagery. An approach is introduced to improve a 3D surface model, extracted photogrammetrically from satellite imagery, based on the geometric building information embodied in existing 2D ground plans. First, buildings are clipped from the extracted DSM based on the 2D polygonal building ground plans. To generate prismatic structures with vertical walls and flat roofs, building shape is retrieved from the cadastre database while elevation information is extracted from the DSM. Within each 2D building boundary, a constant roof height is extracted based on statistical calculations of the height values. After buildings are extracted from the initial surface model, the remaining DSM is further processed and simplified to a smooth DTM that reflects bare ground, without artifacts, local relief, vegetation, cars and city furniture. In a next phase, both models are merged to yield an integrated city model or generalized DSM. The accuracy of the generalized surface model is assessed with a quantitative-statistical analysis by comparison with two different types of reference data.
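    The constant roof height extracted per footprint can be obtained from a robust statistic of the DSM cells inside the 2D ground plan. The sketch below assumes a boolean footprint mask on the DSM grid and is only one plausible reading of the statistical calculation mentioned above.

    ```python
    import numpy as np

    def constant_roof_height(dsm, footprint_mask, dtm=None, stat=np.nanmedian):
        """Roof elevation for one building footprint; height above ground if a DTM is supplied."""
        roof = stat(dsm[footprint_mask])
        if dtm is None:
            return roof
        return roof - stat(dtm[footprint_mask])   # prismatic building height above bare ground
    ```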

  15. 3-D Raman Imagery and Atomic Force Microscopy of Ancient Microscopic Fossils

    NASA Astrophysics Data System (ADS)

    Schopf, J.

    2003-12-01

    Investigations of the Precambrian (~540- to ~3,500-Ma-old) fossil record depend critically on identification of authentic microbial fossils. Combined with standard paleontologic studies (e.g., of paleoecologic setting, population structure, cellular morphology, preservational variants), two techniques recently introduced to such studies -- Raman imagery and atomic force microscopy -- can help meet this need. Laser-Raman imagery is a non-intrusive, non-destructive technique that can be used to demonstrate a micron-scale one-to-one correlation between optically discernable morphology and the organic (kerogenous) composition of individual microbial fossils(1,2), a prime indicator of biogenicity. Such analyses can be used to characterize the molecular-structural makeup of organic-walled microscopic fossils both in acid-resistant residues and in petrographic thin sections, and whether the fossils analyzed are exposed at the upper surface of, or are embedded within (to depths >65 microns), the section studied. By providing means to map chemically, in three dimensions, whole fossils or parts of such fossils(3), Raman imagery can also show the presence of cell lumina, interior cellular cavities, another prime indicator of biogenicity. Atomic force microscopy (AFM) has been used to visualize the nanometer-scale structure of the kerogenous components of single Precambrian microscopic fossils(4). Capable of analyzing minute fragments of ancient organic matter exposed at the upper surface of thin sections (or of kerogen particles deposited on flat surfaces), such analyses hold promise not only for discriminating between biotic and abiotic micro-objects but for elucidation of the domain size -- and, thus, the degree of graphitization -- of the graphene subunits of the carbonaceous matter analyzed. These techniques -- both new to paleobiology -- can provide useful insight into the biogenicity and geochemical maturity of ancient organic matter. References: (1) Kudryavtsev, A.B. et

  16. 3-D Reconstruction of Structure and Dynamics of Coronal Twistors From STEREO and SDO Imagery

    NASA Astrophysics Data System (ADS)

    Slater, G. L.; Freeland, S. L.

    2014-12-01

    Although observed anecdotally for decades in H-alpha and EUV, so-called coronal 'tornadoes' have only recently become the focus of systematic and quantitative study and modeling. This increased focus has primarily been driven by data from the SDO observatory and more recently the IRIS observatory and ground-based telescopes. These ubiquitous magnetic structures differ in appearance and apparent dynamics depending upon position on the sun relative to the observer and upon observational wavelength. One of the key outstanding questions is whether they are actually rotating structures. Progress has been made using spectroscopic observations (IRIS, etc.) but the question is still not settled. We will present true stereographic movies of a set of these structures at various locations on the sun, using combinations of simultaneous STEREO and SDO imagery, in order to address the question of the actual motion of the structures.

  17. The Maradi fault zone: 3-D imagery of a classic wrench fault in Oman

    SciTech Connect

    Neuhaus, D.

    1993-09-01

    The Maradi fault zone extends for almost 350 km in a north-northwest-south-southeast direction from the Oman Mountain foothills into the Arabian Sea, thereby dissecting two prolific hydrocarbon provinces, the Ghaba and Fahud salt basins. During its major Late Cretaceous period of movement, the Maradi fault zone acted as a left-lateral wrench fault. An early exploration campaign based on two-dimensional seismic targeted at fractured Cretaceous carbonates had mixed success and resulted in the discovery of one producing oil field. The structural complexity, rapidly varying carbonate facies, and uncertain fracture distribution prevented further drilling activity. In 1990 a three-dimensional (3-D) seismic survey covering some 500 km² was acquired over the transpressional northern part of the Maradi fault zone. The good data quality and the focusing power of 3-D have enabled stunning insight into the complex structural style of a "textbook" wrench fault, even at deeper levels and below reverse faults hitherto unexplored. Subtle thickness changes within the carbonate reservoir and the unconformably overlying shale seal provided the tool for the identification of possible shoals and depocenters. Horizon attribute maps revealed in detail the various structural components of the wrench assemblage and highlighted areas of increased small-scale faulting/fracturing. The results of four recent exploration wells will be demonstrated and their impact on the interpretation discussed.

  18. 3D target tracking in infrared imagery by SIFT-based distance histograms

    NASA Astrophysics Data System (ADS)

    Yan, Ruicheng; Cao, Zhiguo

    2011-11-01

    The SIFT tracking algorithm is an excellent point-based tracking algorithm with high tracking performance and accuracy, owing to its robustness against rotation, scale change and occlusion. However, when tracking a large 3D target in complicated real scenarios in a forward-looking infrared (FLIR) image sequence taken from an airborne moving platform, the tracked point on a vertical surface often drifts away from the correct position. In this paper, we propose a novel algorithm for 3D target tracking in FLIR image sequences. Our approach uses SIFT keypoints detected in consecutive frames for point correspondence. The candidate position of the tracked point is first estimated by computing the affine transformation from local corresponding SIFT keypoints. The correct position is then located via an optimization method. Euclidean distances between a candidate point and nearby SIFT keypoints are calculated and formed into a SIFT-based distance histogram. The distance histogram defines the cost of associating each candidate point with the correct tracked point, using a constraint based on the topology of each candidate point with its surrounding SIFT keypoints. Minimization of the cost is formulated as a combinatorial optimization problem. Experiments demonstrate that the proposed algorithm effectively improves tracking performance and accuracy.
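    The first stage described above, estimating an affine transformation between consecutive frames from local SIFT correspondences, can be sketched with OpenCV as follows. The frame variables and RANSAC threshold are assumptions, and the distance-histogram optimization of the paper is not shown.

    ```python
    import cv2
    import numpy as np

    def frame_affine(prev_gray, curr_gray):
        """Affine transform mapping keypoints in the previous frame to the current frame."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(prev_gray, None)
        kp2, des2 = sift.detectAndCompute(curr_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
        src = np.float32([kp1[m.queryIdx].pt for m in matches])
        dst = np.float32([kp2[m.trainIdx].pt for m in matches])
        A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                          ransacReprojThreshold=3.0)
        return A   # 2x3 matrix; apply it to the tracked point to get its candidate position
    ```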

  19. Very fast road database verification using textured 3D city models obtained from airborne imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV images. This algorithm contains two processes which exchange input and output but otherwise run independently of each other. These processes are textured urban terrain reconstruction and road verification. The first process comprises a dense photogrammetric reconstruction of the 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of road. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect) and the second describes the state of its underlying road model (applicable, not applicable). Based on Dempster-Shafer theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input, together with initial road database entries, for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map, followed by road map extraction by means of vectorization and filtering of geometrically and topologically inconsistent objects. Depending on the time issue and availability of a geo-database for buildings, the urban terrain reconstruction procedure has semantic models
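    The Dempster-Shafer step above fuses, per road object, the evidence produced by the individual verification methods. Below is a minimal sketch of Dempster's rule over the frame {correct, incorrect}, with the mass left on the whole frame read as "unknown"; the mass values are hypothetical and the mapping is a simplification of the paper's two-distribution scheme.

    ```python
    def combine(m1, m2):
        """Dempster's rule for masses over {'correct', 'incorrect', 'unknown'} ('unknown' = whole frame)."""
        c, i, u = 'correct', 'incorrect', 'unknown'
        conflict = m1[c] * m2[i] + m1[i] * m2[c]
        k = 1.0 - conflict                      # normalisation constant
        return {
            c: (m1[c] * m2[c] + m1[c] * m2[u] + m1[u] * m2[c]) / k,
            i: (m1[i] * m2[i] + m1[i] * m2[u] + m1[u] * m2[i]) / k,
            u: (m1[u] * m2[u]) / k,
        }

    # Hypothetical example: evidence from two verification methods fused into one belief
    print(combine({'correct': 0.6, 'incorrect': 0.1, 'unknown': 0.3},
                  {'correct': 0.5, 'incorrect': 0.2, 'unknown': 0.3}))
    ```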

  20. Inlining 3D Reconstruction, Multi-Source Texture Mapping and Semantic Analysis Using Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.

    2016-06-01

    This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that are, at the same time, being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements such as windows, which are reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows, and good visual quality when rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information, the roof, wall and ground surfaces found are intersected and limited in their extension to form a closed 3D building hull. For texture mapping, the hull polygons are projected into each possible input bitmap to find suitable color sources regarding coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are copied into a compact texture atlas, without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and
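    The local RANSAC-based regression used above to find planar roof and wall patches can be illustrated with a basic plane-RANSAC loop. Thresholds, iteration count and the input array are assumptions; the paper's implementation additionally performs topology and semantics analysis not shown here.

    ```python
    import numpy as np

    def ransac_plane(points, n_iter=500, tol=0.05, rng=None):
        """Fit a plane (unit normal n, offset d with n.x + d = 0) to an (N, 3) point cloud by RANSAC."""
        if rng is None:
            rng = np.random.default_rng(0)
        best_inliers = np.zeros(len(points), dtype=bool)
        best_model = None
        for _ in range(n_iter):
            p = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p[1] - p[0], p[2] - p[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                        # degenerate (collinear) sample
            n = n / norm
            d = -n @ p[0]
            inliers = np.abs(points @ n + d) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (n, d)
        return best_model, best_inliers
    ```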

  1. Two Eyes, 3D Early Results: Stereoscopic vs 2D Representations of Highly Spatial Scientific Imagery

    NASA Astrophysics Data System (ADS)

    Price, Aaron

    2013-06-01

    "Two Eyes, 3D" is a 3-year NSF funded research project to study the educational impacts of using stereoscopic representations in informal settings. The first study conducted as part of the project tested children 5-12 on their ability to perceive spatial elements of slides of scientific objects shown to them in either stereoscopic or 2D format. Children were also tested for prior spatial ability. Early results suggest that stereoscopy does not have a major impact on perceiving spatial elements of an image, but it does have a more significant impact on how the children apply that knowledge when presented with a common sense situation. The project is run by the AAVSO and this study was conducted at the Boston Museum of Science.

  2. Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery

    SciTech Connect

    Karakaya, Mahmut; Kerekes, Ryan A; Gleason, Shaun Scott; Martins, Rodrigo; Dyer, Michael

    2011-01-01

    Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

  3. Differential Synthetic Aperture Ladar

    SciTech Connect

    Stappaerts, E A; Scharlemann, E

    2005-02-07

    We report a differential synthetic aperture ladar (DSAL) concept that relaxes platform and laser requirements compared to conventional SAL. Line-of-sight translation/vibration constraints are reduced by several orders of magnitude, while laser frequency stability is typically relaxed by an order of magnitude. The technique is most advantageous for shorter laser wavelengths, ultraviolet to mid-infrared. Analytical and modeling results, including the effects of speckle and atmospheric turbulence, are presented. Synthetic aperture ladars are of growing interest, and several theoretical and experimental papers have been published on the subject. Compared to RF synthetic aperture radar (SAR), platform/ladar motion and transmitter bandwidth constraints are especially demanding at optical wavelengths. For mid-IR and shorter wavelengths, deviations from a linear trajectory along the synthetic aperture length have to be submicron, or their magnitude must be measured to that precision for compensation. The laser coherence time has to be at least the synthetic aperture transit time, or the transmitter phase has to be recorded and a correction applied on detection.

  4. Knowledge-Based 3D Building Model Recognition Using Convolutional Neural Networks from LiDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and up-to-date 3D models of buildings, a key element of city structures, for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models, such as flat, gable, hip, and pyramid hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that allow the convolutional neural network to localize the boundary of each individual roof. A CNN is a feed-forward neural network built on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure, and it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different roof shapes, the computation time for learning can be decreased significantly by using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting the pattern of building roofs automatically, considering the complementary nature of height and RGB information.
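    A minimal sketch of a CNN of the kind described, taking co-registered height and RGB patches as a four-channel input and predicting one of the four roof classes. The patch size, layer widths and training configuration are assumptions, not the authors' architecture.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 4                      # flat, gable, hip, pyramid hip

    def build_roof_cnn(patch=64, channels=4):
        """Four-channel input (RGB ortho-photo + normalised height); softmax over roof types."""
        model = models.Sequential([
            layers.Input(shape=(patch, patch, channels)),
            layers.Conv2D(16, 3, activation='relu', padding='same'),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation='relu', padding='same'),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation='relu'),
            layers.Dense(NUM_CLASSES, activation='softmax'),
        ])
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        return model

    model = build_roof_cnn()
    model.summary()
    ```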

  5. 3D Case Studies of Monitoring Dynamic Structural Tests using Long Exposure Imagery

    NASA Astrophysics Data System (ADS)

    McCarthy, D. M. J.; Chandler, J. H.; Palmeri, A.

    2014-06-01

    Structural health monitoring uses non-destructive testing programmes to detect long-term degradation phenomena in civil engineering structures. Structural testing may also be carried out to assess a structure's integrity following a potentially damaging event. Such investigations are increasingly carried out with vibration techniques, in which the structural response to artificial or natural excitations is recorded and analysed from a number of monitoring locations. Photogrammetry is of particular interest here since a very high number of monitoring locations can be measured using just a few images. To achieve the necessary imaging frequency to capture the vibration, it has been necessary to reduce the image resolution at the cost of spatial measurement accuracy. Even specialist sensors are limited by a compromise between sensor resolution and imaging frequency. To alleviate this compromise, a different approach has been developed and is described in this paper. Instead of using high-speed imaging to capture the instantaneous position at each epoch, long-exposure images are instead used, in which the localised image of the object becomes blurred. The approach has been extended to create 3D displacement vectors for each target point via multiple camera locations, which allows the simultaneous detection of transverse and torsional mode shapes. The proposed approach is frequency invariant allowing monitoring of higher modal frequencies irrespective of a sampling frequency. Since there is no requirement for imaging frequency, a higher image resolution is possible for the most accurate spatial measurement. The results of a small scale laboratory test using off-the-shelf consumer cameras are demonstrated. A larger experiment also demonstrates the scalability of the approach.

  6. Quantification of gully volume using very high resolution DSM generated through 3D reconstruction from airborne and field digital imagery

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Zarco-Tejada, Pablo; Laredo, Mario; Gómez, Jose Alfonso

    2013-04-01

    Major advances have been made recently in automatic 3D photo-reconstruction techniques using uncalibrated and non-metric cameras (James and Robson, 2012). However, their application to soil conservation studies and landscape feature identification is currently at the outset. The aim of this work is to compare the performance of a remote sensing technique using a digital camera mounted on an airborne platform with 3D photo-reconstruction, a method already validated for gully erosion assessment purposes (Castillo et al., 2012). A field survey was conducted in November 2012 in a 250 m-long gully located in field crops on a Vertisol in Cordoba (Spain). The airborne campaign was conducted with a 4000x3000 digital camera installed onboard an aircraft flying at 300 m above ground level to acquire 6 cm resolution imagery. A total of 990 images were acquired over the area, ensuring a large overlap in the across- and along-track directions of the aircraft. An ortho-mosaic and the digital surface model (DSM) were obtained through automatic aerial triangulation and camera calibration methods. For the field-level photo-reconstruction technique, the gully was divided into several reaches to allow appropriate reconstruction (about 150 pictures taken per reach) and, finally, the resulting point clouds were merged into a single mesh. A centimetric-accuracy GPS provided a benchmark dataset for the gully perimeter and distinguishable reference points in order to allow the assessment of measurement errors of the airborne technique and the georeferencing of the photo-reconstruction 3D model. The uncertainty in the gully limits definition was explicitly addressed by comparing several criteria obtained from the 3D models (slope and second derivative) with the outer perimeter obtained by the GPS operator visually identifying the change in slope at the top of the gully walls. In this study we discuss the magnitude of planimetric and altimetric errors and the differences observed between the
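    Once a DSM of the gully and a reference ground surface are available on the same grid, the volume quantification itself reduces to summing depth differences over the gully extent. A sketch with assumed arrays and cell size (the survey above additionally handles georeferencing and gully-limit definition):

    ```python
    import numpy as np

    def gully_volume(dsm_reference, dsm_gully, gully_mask, cell_size_m):
        """Eroded volume (m^3): depth of the gully surface below the reference, summed over the mask."""
        depth = dsm_reference - dsm_gully
        depth = np.where(gully_mask, np.clip(depth, 0.0, None), 0.0)   # ignore cells outside the gully
        return float(depth.sum() * cell_size_m ** 2)
    ```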

  7. Flight test results of ladar brownout look-through capability

    NASA Astrophysics Data System (ADS)

    Stelmash, Stephen; Münsterer, Thomas; Kramper, Patrick; Samuelis, Christian; Bühler, Daniel; Wegner, Matthias; Sheth, Sagar

    2015-06-01

    The paper discusses recent results of flight tests performed with the Airbus Defence and Space ladar system at Yuma Proving Ground. The ladar under test was the SferiSense® system, which is in operational use as an in-flight obstacle warning and avoidance system on the NH90 transport helicopter. Only minor modifications were made to the sensor firmware to optimize its performance in brownout. A new filtering algorithm designed to segment dust artefacts out of the collected 3D data in real time was also employed. The results proved that this ladar sensor is capable of detecting obstacles through brownout dust clouds with a depth extending up to 300 meters from the landing helicopter.

  8. Mapping tropical biodiversity using spectroscopic imagery : characterization of structural and chemical diversity with 3-D radiative transfer modeling

    NASA Astrophysics Data System (ADS)

    Feret, J. B.; Gastellu-Etchegorry, J. P.; Lefèvre-Fonollosa, M. J.; Proisy, C.; Asner, G. P.

    2014-12-01

    The accelerating loss of biodiversity is a major environmental trend. Tropical ecosystems are particularly threatened due to climate change, invasive species, farming and natural resources exploitation. Recent advances in remote sensing of biodiversity confirmed the potential of high spatial resolution spectroscopic imagery for species identification and biodiversity mapping. Such information bridges the scale-gap between small-scale, highly detailed field studies and large-scale, low-resolution satellite observations. In order to produce fine-scale resolution maps of canopy alpha-diversity and beta-diversity of the Peruvian Amazonian forest, we designed, applied and validated a method based on spectral variation hypothesis to CAO AToMS (Carnegie Airborne Observatory Airborne Taxonomic Mapping System) images, acquired from 2011 to 2013. There is a need to understand on a quantitative basis the physical processes leading to this spectral variability. This spectral variability mainly depends on canopy chemistry, structure, and sensor's characteristics. 3D radiative transfer modeling provides a powerful framework for the study of the relative influence of each of these factors in dense and complex canopies. We simulated series of spectroscopic images with the 3D radiative model DART, with variability gradients in terms of leaf chemistry, individual tree structure, spatial and spectral resolution, and applied methods for biodiversity mapping. This sensitivity study allowed us to determine the relative influence of these factors on the radiometric signal acquired by different types of sensors. Such study is particularly important to define the domain of validity of our approach, to refine requirements for the instrumental specifications, and to help preparing hyperspectral spatial missions to be launched at the horizon 2015-2025 (EnMAP, PRISMA, HISUI, SHALOM, HYSPIRI, HYPXIM). Simulations in preparation include topographic variations in order to estimate the robustness

  9. Combining Public Domain and Professional Panoramic Imagery for the Accurate and Dense 3D Reconstruction of the Destroyed Bel Temple in Palmyra

    NASA Astrophysics Data System (ADS)

    Wahbeh, W.; Nebiker, S.; Fangi, G.

    2016-06-01

    This paper exploits the potential of dense multi-image 3D reconstruction of destroyed cultural heritage monuments by either using public domain touristic imagery only or by combining the public domain imagery with professional panoramic imagery. The focus of our work is placed on the reconstruction of the temple of Bel, one of the Syrian heritage monuments, which was destroyed in September 2015 by the so-called "Islamic State". The great temple of Bel is considered one of the most important religious buildings of the 1st century AD in the East, with a unique design. The investigations and the reconstruction were carried out using two types of imagery. The first are freely available generic touristic photos collected from the web. The second are panoramic images captured in 2010 for documenting those monuments. In the paper we present a 3D reconstruction workflow for both types of imagery using state-of-the-art dense image matching software, addressing the non-trivial challenges of combining uncalibrated public domain imagery with panoramic images with very wide baselines. We subsequently investigate the accuracy and completeness obtainable from the public domain touristic images alone and from their combination with spherical panoramas. We furthermore discuss the challenges of co-registering the weakly connected 3D point cloud fragments resulting from the limited coverage of the touristic photos. We then describe an approach using spherical photogrammetry as a virtual topographic survey, allowing the co-registration of a detailed and accurate single 3D model of the temple interior and exterior.

  10. Fusion of LADAR with SAR for precision strike

    SciTech Connect

    Cress, D.H.; Muguira, M.R.

    1995-03-01

    This paper presents a concept for fusing 3-dimensional image reconnaissance data with LADAR imagery for aim point refinement. The approach is applicable to fixed or quasi-fixed targets. Quasi-fixed targets are targets that are not expected to be moved between the time of reconnaissance and the time of target engagement. The 3-dimensional image data is presumed to come from standoff reconnaissance assets tens to hundreds of kilometers from the target area, or from acquisitions prior to hostilities. Examples are synthetic aperture radar (SAR) or stereo-processed satellite imagery. SAR can be used to generate a 3-dimensional map of the surface through processing of data acquired with conventional SAR using two closely spaced, parallel reconnaissance paths, either airborne or satellite based. Alternatively, a specialized airborne SAR having two receiving antennas may be used for data acquisition. The data sets used in this analysis are: (1) LADAR data acquired using a Hughes-Danbury system flown over a portion of Kirtland AFB during the period September 15-16, 1993; (2) two-pass interferometric SAR data flown over a terrain-dominated area of Kirtland AFB; (3) 3-dimensional mapping of an urban-dominated area of the Sandia National Laboratories and adjacent cultural area extracted from aerial photography by Vexcel Corporation; (4) LADAR data acquired at Eglin AFB under Wright Laboratory's Advanced Technology Ladar System (ATLAS) program using a 60 μJ, 75 kHz CO2 laser; and (5) two-pass interferometric SAR data generated by Sandia's STRIP DCS (Data Collection System) radar corresponding to the ATLAS LADAR data. The cultural data set was used in the urban area rather than SAR because high quality interferometric SAR data were not available for the urban-type area.

  11. Research on key technologies of LADAR echo signal simulator

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Shi, Rui; Ye, Jiansen; Wang, Xin; Li, Zhuo

    2015-10-01

    The LADAR echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR, and is designed to simulate the LADAR return signal under laboratory conditions. The device provides the laser echo signal of target and background so that imaging LADAR systems can be tested for performance. Several key technologies are investigated in this paper. First, a 3D model of a typical target is built and transformed into target echo signal data based on the ranging equation and the target's reflection characteristics. Then, the system model and time series model of the LADAR echo signal simulator are established. Factors that can induce fixed delay error and random delay error in the simulated return signals are analyzed. In the simulation system, the signal propagation delay of the circuits and the response time of the pulsed lasers belong to the fixed delay error. The counting error of the digital delay generator, the jitter of the system clock, and the desynchronization between the trigger signal and the clock signal are part of the random delay error. These system insertion delays are analyzed quantitatively, and the corresponding noise data are obtained. The target echo signals are obtained by superimposing the noise data on the pure target echo signal. To overcome these disadvantageous factors, a method of adjusting the timing diagram of the simulation system is proposed. Finally, the simulated echo signals are processed using a detection algorithm to complete the 3D reconstruction of the object. The simulation results reveal that the range resolution can be better than 8 cm.
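    The ranging relation the simulator is built around is the basic time-of-flight equation, and the fixed and random delay errors discussed above map directly into range error. The sketch below uses hypothetical delay values; note that a 0.5 ns RMS timing jitter corresponds to about 7.5 cm of range error, consistent with the sub-8 cm resolution quoted.

    ```python
    import numpy as np

    C = 299_792_458.0                   # speed of light, m/s

    def range_from_delay(round_trip_s, fixed_delay_s=0.0):
        """Target range from measured round-trip time, after removing the known fixed delay."""
        return C * (round_trip_s - fixed_delay_s) / 2.0

    # Hypothetical numbers: 500 ns round trip, 20 ns circuit/laser delay, 0.5 ns RMS jitter
    rng = np.random.default_rng(0)
    measured = 500e-9 + 20e-9 + rng.normal(0.0, 0.5e-9, 1000)
    ranges = range_from_delay(measured, fixed_delay_s=20e-9)
    print(ranges.mean(), ranges.std())   # ~74.95 m mean range with ~7.5 cm RMS range error
    ```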

  12. Optical phased-array ladar.

    PubMed

    Montoya, Juan; Sanchez-Rubio, Antonio; Hatch, Robert; Payson, Harold

    2014-11-01

    We demonstrate a ladar with 0.5 m class range resolution obtained by integrating a continuous-wave optical phased-array transmitter with a Geiger-mode avalanche photodiode receiver array. In contrast with conventional ladar systems, an array of continuous-wave sources is used to effectively pulse illuminate a target by electro-optically steering far-field fringes. From the reference frame of a point in the far field, a steered fringe appears as a pulse. Range information is thus obtained by measuring the arrival time of a pulse return from a target to a receiver pixel. This ladar system offers a number of benefits, including broad spectral coverage, high efficiency, small size, power scalability, and versatility.

  13. 3D Visualisation and Artistic Imagery to Enhance Interest in "Hidden Environments"--New Approaches to Soil Science

    ERIC Educational Resources Information Center

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-01-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke "soil atlas" was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets…

  14. Generation of 3D Model for Urban area using Ikonos and Cartosat-1 Satellite Imageries with RS and GIS Techniques

    NASA Astrophysics Data System (ADS)

    Rajpriya, N. R.; Vyas, A.; Sharma, S. A.

    2014-11-01

    Urban design is a subject that is concerned with the shape, the surface, and the physical arrangement of all kinds of urban elements. It is also a practical process that requires detailed, multi-dimensional description, and spatial analysis based on 3D city models offers a way to meet that need. Ahmedabad is the third fastest-growing city in the world, with a large amount of development in infrastructure and planning. The fabric of the city is changing and expanding at the same time, which creates a need for 3D visualization of the city in order to develop sustainable planning for it. These areas have to be monitored and mapped on a regular basis, and satellite remote sensing images provide a valuable and irreplaceable source for urban monitoring. With this, the derivation of structural urban types or the mapping of urban biotopes becomes possible. The present study focused on developing a technique for 3D modeling of buildings for urban area analysis and on implementing the encoding standards prescribed in "OGC CityGML" for urban features. An attempt has been made to develop a 3D city model with level of detail 1 (LOD1) for part of the city of Ahmedabad in the State of Gujarat, India. It demonstrates the capability to monitor urbanization in 2D and 3D.

  15. Optimization of space borne imaging ladar sensor for asteroid studies using parameter design

    NASA Astrophysics Data System (ADS)

    Wheel, Peter J.; Dobbs, Michael E.; Sharp, William E.

    2002-10-01

    Imaging LADAR is a hybrid technology that offers the ability to measure basic physical and morphological characteristics (topography, rotational state, and density) of a small body from a single fast flyby, without requiring months in orbit. In addition, the imaging LADAR provides key flight navigation information including range, altitude, hazard/target avoidance, and closed-loop landing/fly-by navigation information. The Near Laser Ranger demonstrated many of these capabilities as part of the NEAR mission. The imaging LADAR scales the concept of a laser ranger into a full 3D imager. Imaging LADAR systems combine laser illumination of the target (which means that imaging is independent of solar illumination and the image SNR is controlled by the observer), with laser ranging and imaging (producing high resolution 3D images in a fraction of the time necessary for a passive imager). The technical concept described below alters the traditional design space (dominated by pulsed LADAR systems) with the introduction of a pseudo-noise (PN) coded continuous wave (CW) laser system which allows for variable range resolution mapping and leverages enormous commercial investments in high power, long-life lasers for telecommunications.
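
    The following sketch illustrates the PN-coded CW ranging idea mentioned in the concept, assuming a simple baseband model: the return is a delayed, noisy copy of the code, and circular cross-correlation recovers the round-trip delay. Chip rate, code length and target range are arbitrary example values, not mission parameters.

```python
import numpy as np

C = 3.0e8   # m/s

def pn_range(chip_rate_hz=10e6, n_chips=1023, true_range_m=1500.0,
             noise_rms=0.5, seed=1):
    rng = np.random.default_rng(seed)
    code = (rng.integers(0, 2, n_chips) * 2 - 1).astype(float)   # +/-1 PN chips
    delay_chips = int(round(2.0 * true_range_m / C * chip_rate_hz))
    echo = np.roll(code, delay_chips) + rng.normal(0.0, noise_rms, n_chips)
    # circular cross-correlation; the peak lag is the round-trip delay in chips
    corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
    lag = int(np.argmax(corr))
    return lag / chip_rate_hz * C / 2.0

# Range resolution is one chip: c / (2 * chip rate) = 15 m at 10 Mchip/s.
print(f"estimated range: {pn_range():.1f} m")    # ~1500 m
```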

  16. Geological interpretation and analysis of surface based, spatially referenced planetary imagery data using PRoGIS 2.0 and Pro3D.

    NASA Astrophysics Data System (ADS)

    Barnes, R.; Gupta, S.; Giordano, M.; Morley, J. G.; Muller, J. P.; Tao, Y.; Sprinks, J.; Traxler, C.; Hesina, G.; Ortner, T.; Sander, K.; Nauschnegg, B.; Paar, G.; Willner, K.; Pajdla, T.

    2015-10-01

    We apply the capabilities of the geospatial environment PRoGIS 2.0 and the real-time rendering viewer PRo3D to geological analysis of NASA's Mars Exploration Rover-B (MER-B Opportunity rover) and Mars Science Laboratory (MSL Curiosity rover) datasets. Short baseline and serendipitous long baseline stereo Pancam rover imagery are used to create 3D point clouds which can be combined with super-resolution images derived from Mars Reconnaissance Orbiter HiRISE orbital data, and super-resolution outcrop images derived from MER Pancam, as well as hand-lens scale images for geology and outcrop characterization at all scales. Data within the PRoViDE database are presented and accessed through the PRoGIS interface. Simple geological measurement tools are implemented within the PRoGIS and PRo3D web software to accurately measure the dip and strike of bedding in outcrops, to create detailed stratigraphic logs for correlation between the areas investigated, and to develop realistic 3D models for the characterization of planetary surface processes. Annotation tools are being developed to aid discussion and dissemination of the observations within the planetary science community.
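
    As an illustration of one of the measurements mentioned above, the sketch below computes strike and dip from three points picked on a bedding plane (east-north-up coordinates, right-hand-rule strike convention). It is a generic geometric calculation, not the PRo3D implementation.

```python
import numpy as np

def strike_dip(p1, p2, p3):
    """Strike and dip (degrees) of the plane through three points (x=east, y=north, z=up)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)          # plane normal
    if n[2] < 0:                            # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
    # dip direction is the azimuth of the normal's horizontal projection;
    # strike is 90 degrees counter-clockwise from it (right-hand rule)
    strike = (np.degrees(np.arctan2(n[0], n[1])) - 90.0) % 360.0
    return strike, dip

# Example: a bed dipping 30 degrees toward the east (dip direction 090, strike 000).
print(strike_dip((0, 0, 0), (0, 1, 0), (1, 0, -np.tan(np.radians(30)))))
```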

  17. Deep space LADAR, phase 1

    NASA Astrophysics Data System (ADS)

    Frey, Randy W.; Rawlins, Greg; Zepkin, Neil; Bohlin, John

    1989-03-01

    A pseudo-ranging laser radar (PRLADAR) concept is proposed to provide extended range capability to tracking LADAR systems meeting the long-range requirements of SDI mission scenarios such as the SIE midcourse program. The project will investigate the payoff of several transmitter modulation techniques and a feasibility demonstration using a breadboard implementation of a new receiver concept called the Phase Multiplexed Correlator (PMC) will be accomplished. The PRLADAR concept has specific application to spaceborne LADAR tracking missions where increased CNR/SNR performance gained by the proposed technique may reduce the laser power and/or optical aperture requirement for a given mission. The reduction in power/aperture has similar cost reduction advantages in commercial ranging applications. A successful Phase 1 program will lay the groundwork for a quick reaction upgrade to the AMOS/LASE system in support of near term SIE measurement objectives.

  18. 3D visualisation and artistic imagery to enhance interest in `hidden environments' - new approaches to soil science

    NASA Astrophysics Data System (ADS)

    Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.

    2014-09-01

    Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke 'soil atlas' was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets of artistic illustrations were produced, each set showing the effects of soil organic-matter density and water content on fungal density, to determine potential for visualisations and interactivity in stimulating interest in soil and soil illustrations, interest being an important factor in facilitating learning. The illustrations were created using 3D modelling packages, and a wide range of styles were produced. This allowed a preliminary study of the relative merits of different artistic styles, scientific-credibility, scale, abstraction and 'realism' (e.g. photo-realism or realism of forms), and any relationship between these and the level of interest indicated by the study participants in the soil visualisations and VE. The study found significant differences in mean interest ratings for different soil illustration styles, as well as in the perception of scientific-credibility of these styles, albeit for both measures there was considerable difference of attitude between participants about particular styles. There was also found to be a highly significant positive correlation between participants rating styles highly for interest and highly for scientific-credibility. There was furthermore a particularly high interest rating among participants for seeing temporal soil processes illustrated/animated, suggesting this as a particularly promising method for further stimulating interest in soil illustrations and soil itself.

  19. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera

  20. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  1. Ladar-based IED detection

    NASA Astrophysics Data System (ADS)

    Engström, Philip; Larsson, Hâkan; Letalick, Dietmar

    2014-05-01

    An improvised explosive device (IED) is a bomb constructed and deployed in a non-standard manner. Improvised means that the bomb maker used whatever he could get his hands on, making the device very hard to predict and detect. Nevertheless, the manner in which IEDs are deployed and used, for example as roadside bombs, follows certain patterns. One possible approach for early warning is to record the surroundings when it is safe and use this as reference data for change detection. In this paper a LADAR-based system for IED detection is presented. The idea is to measure the area in front of the vehicle while driving and compare it to the previously recorded reference data. By detecting new, missing, or changed objects, the system can make the driver aware of probable threats.
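
    A much simplified sketch of the change-detection step is shown below: live scan points that lie farther than a threshold from every reference point are flagged as candidate new or moved objects. A fielded system would georegister the scans and use a spatial index rather than the brute-force distance test used here; the threshold and point clouds are assumptions.

```python
import numpy as np

def changed_points(reference, live, threshold_m=0.25):
    """Return live points farther than threshold_m from every reference point,
    i.e. candidates for new or moved objects along the route."""
    ref = np.asarray(reference, dtype=float)
    liv = np.asarray(live, dtype=float)
    # pairwise distances, shape (n_live, n_ref); brute force for clarity only
    d = np.linalg.norm(liv[:, None, :] - ref[None, :, :], axis=2)
    return liv[d.min(axis=1) > threshold_m]

# Reference scan: a flat road patch sampled on a 0.5 m grid.
xs, ys = np.meshgrid(np.arange(0, 10, 0.5), np.arange(0, 10, 0.5))
reference = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
live = np.vstack([reference, [[5.2, 5.2, 0.4]]])    # one new object on the road
print(changed_points(reference, live))              # -> the new point only
```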

  2. Characterization of articulated vehicles using ladar seekers

    NASA Astrophysics Data System (ADS)

    Wellfare, Michael R.; Norris-Zachery, Karen

    1997-08-01

    Many vehicle targets of interest to military automatic target recognition (ATR) possess articulating components: that is, they have components that change position relative to the main body. Many vehicles also have multiple configurations wherein one or more devices or objects may be added to enhance specific military or logistical capabilities. As the expected target set for military ATR becomes more comprehensive, many additional articulations and optional components must be handled. Mobile air defense units often include moving radar antennae as well as turreted guns and missile launchers. Surface-to-surface missile launchers may be encountered with or without missiles, and with the launch rails raised or lowered. Engineer and countermine vehicles have a tremendous number of possible configurations, and even conventional battle tanks may vary in items such as external reactive armor, long-range tanks, turret azimuth, and gun elevation. These changes pose a significant barrier to the target identification process since they greatly increase the range of possible target signatures. When combined with variations already encountered due to target aspect changes, an extremely large number of possible signatures is formed. Conventional algorithms cannot process so many possibilities effectively, so in response, the matching process is often made less selective. This degrades identification performance, increases false alarm rates, and increases data requirements for algorithm testing and training. By explicitly involving articulation in the detection and identification stages of an ATR algorithm, more precise matching constraints can be applied and better selectivity can be achieved. Additional benefits include the measurement of the position and orientation of articulated components, which often has tactical significance. In this paper, the results of a study investigating the impact of target articulation in ATR for military vehicles are presented. 3D ladar signature

  3. Real-time range generation for ladar hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Coker, Charles F.

    1996-05-01

    Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware and is optimized for generating visible or infrared imagery in real time. A by-product of the rendering process, in the form of a depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to generating the scenes, calculating the range values, and outputting the range data for a LADAR seeker.
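
    The depth-buffer-to-range conversion referred to above can be sketched as follows for a standard OpenGL-style perspective projection; the Onyx-specific pipeline details and seeker output formatting are not shown, and the near/far planes and buffer depth are assumed values. Note the result is eye-space depth along the optical axis; per-pixel slant range additionally needs each pixel's view ray.

```python
import numpy as np

def depth_buffer_to_range(d, near, far):
    """Convert normalized depth-buffer values d in [0, 1] to metric eye-space depth."""
    d = np.asarray(d, dtype=float)
    z_ndc = 2.0 * d - 1.0                       # window depth -> normalized device coords
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

# Integer depth buffers are first normalized by their maximum code value.
depth_int = np.array([0, 2**24 // 2, 2**24 - 1])        # e.g. a 24-bit depth buffer
d = depth_int / float(2**24 - 1)
print(depth_buffer_to_range(d, near=1.0, far=5000.0))   # [1.0, ~2.0, 5000.0]
```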

  4. LADAR scene projector for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Cornell, Michael C.; Naumann, Charles B.; Stockbridge, Robert G.; Snyder, Donald R.

    2002-07-01

    Future types of direct detection LADAR seekers will employ focal plane arrays in their receivers. Existing LADAR scene projection technology cannot meet the needs of testing these types of seekers in a Hardware-in-the-Loop environment. It is desired that the simulated LADAR return signals generated by the projection hardware be representative of the complex targets and background of a real LADAR image. A LADAR scene projector has been developed that is capable of meeting these demanding test needs. It can project scenes of simulated 2D LADAR return signals without scanning. In addition, each pixel in the projection can be represented by a 'complex' optical waveform, which can be delivered with sub-nanosecond precision. Finally, the modular nature of the projector allows it to be configured to operate at different wavelengths. This paper describes the LADAR Scene Projector and its full capabilities.

  5. Progress on MEMS-scanned ladar

    NASA Astrophysics Data System (ADS)

    Stann, Barry L.; Dammann, John F.; Giza, Mark M.

    2016-05-01

    The Army Research Laboratory (ARL) has continued to research a short-range ladar imager for use on small unmanned ground vehicles (UGV) and recently small unmanned air vehicles (UAV). The current ladar brassboard is based on a micro-electro-mechanical system (MEMS) mirror coupled to a low-cost pulsed erbium fiber laser. It has a 5-6 Hz frame rate, an image size of 256 (h) x 128 (v) pixels, a 42° x 21° field of regard, 35 m range, eyesafe operation, and 40 cm range resolution with provisions for super-resolution. Experience with driving experiments on small ground robots and efforts to extend the use of the ladar to UAV applications has encouraged work to improve the ladar's performance. The data acquisition system can now capture range data from the three return pulses in a pixel (that is, first, last, and largest return), and information such as elapsed time, operating parameters, and data from an inertial navigation system. We will mention the addition and performance of subsystems to obtain eye-safety certification. To meet the enhanced range requirement for the UAV application, we describe a new receiver circuit that improves the signal-to-noise ratio (SNR) several-fold over the existing design. Complementing this work, we discuss research to build a low-capacitance large-area detector that may enable even further improvement in receiver SNR. Finally, we outline progress on building a breadboard ladar to demonstrate increased range to 160 m. If successful, this ladar will be integrated with a color camera and inertial navigation system to build a data collection package to determine imaging performance for a small UAV.

  6. Multi-dimensional, non-contact metrology using trilateration and high resolution FMCW ladar.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

    Here we propose, describe, and provide experimental proof-of-concept demonstrations of a multidimensional, non-contact length metrology system design based on high-resolution (millimeter to sub-100 micron) frequency modulated continuous wave (FMCW) ladar and trilateration based on length measurements from multiple, optical-fiber-connected transmitters. With an accurate FMCW ladar source, the trilateration-based design provides 3D resolution inherently independent of standoff range and allows self-calibration for flexible setup of a field system. A proof-of-concept experimental demonstration was performed using a highly stabilized, 2 THz bandwidth chirped laser source, two emitters, and one scanning emitter/receiver providing 1D surface profiles (2D metrology) of diffuse targets. The measured coordinate precision of <200 microns was determined to be limited by laser speckle issues caused by diffuse scattering from the targets. PMID:26193132
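
    A proof-of-concept sketch of the trilateration step follows: given FMCW length measurements to emitters at known positions, subtracting one range equation from the others removes the quadratic terms and leaves a small linear least-squares problem for the target coordinates. This is a generic formulation with made-up geometry, not the authors' solver.

```python
import numpy as np

def trilaterate(emitters, lengths):
    """emitters: (n, 3) known positions; lengths: (n,) measured distances to the target."""
    p = np.asarray(emitters, dtype=float)
    r = np.asarray(lengths, dtype=float)
    # |x - p_i|^2 = r_i^2; subtract the first equation to cancel the |x|^2 term
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

emitters = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
target = np.array([0.7, 0.4, 0.9])
lengths = np.linalg.norm(emitters - target, axis=1)
print(trilaterate(emitters, lengths))       # ~[0.7, 0.4, 0.9]
```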

  7. A Comparative Study between Frequency-Modulated Continuous Wave LADAR and Linear LiDAR

    NASA Astrophysics Data System (ADS)

    Massaro, R. D.; Anderson, J. E.; Nelson, J. D.; Edwards, J. D.

    2014-11-01

    Topographic Light Detection and Ranging (LiDAR) technology has advanced greatly in the past decade. Pulse repetition rates of terrestrial and airborne systems have multiplied, thus vastly increasing data acquisition rates. Geiger-mode and FLASH LiDAR have also become far more mature technologies. However, a new and relatively unknown technology is maturing rapidly: Frequency-Modulated Continuous Wave Laser Detection and Ranging (FMCW-LADAR). Possessing attributes more akin to modern radar systems, FMCW-LADAR has the ability to more finely resolve objects separated by very small ranges. For tactical military applications (as described here), this can be a real advantage over single-frequency, direct-detect systems. In fact, FMCW-LADAR can range-resolve objects at 10^-7 to 10^-6 meter scales. FMCW-LADAR can also detect objects at greater range with less power. In this study, a FMCW-LADAR instrument and a traditional LiDAR instrument are compared. The co-located terrestrial scanning instruments were set up to perform simultaneous 3-D measurements of the given scene. Several targets were placed in the scene to expose the difference in the range resolution capabilities of the two instruments. The scans were performed at or near the same horizontal and vertical angular resolutions. It is demonstrated that the FMCW-LADAR surpasses the performance of the linear-mode LiDAR scanner in terms of range resolution. Some results showing the maximum range acquisition are discussed, but this was not studied in detail as the scanners' laser powers differed by a small amount. Applications and implications of this technology are also discussed.
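
    The range-resolution advantage discussed above is governed, to first order, by the sweep bandwidth: dR = c/(2B). The snippet below evaluates this for an assumed multi-terahertz optical chirp and for a pulse-equivalent bandwidth typical of a direct-detect system; the specific instruments compared in the study are not modeled.

```python
C = 3.0e8   # m/s

def fmcw_range_resolution(bandwidth_hz):
    """Two-point range resolution dR = c / (2 B) for a chirp of bandwidth B."""
    return C / (2.0 * bandwidth_hz)

print(fmcw_range_resolution(2.0e12))    # 2 THz optical chirp -> 7.5e-05 m (sub-100 micron)
print(fmcw_range_resolution(1.5e8))     # 150 MHz pulse-equivalent bandwidth -> 1.0 m
```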

  8. Implementing torsional-mode Doppler ladar.

    PubMed

    Fluckiger, David U

    2002-08-20

    Laguerre-Gaussian laser modes carry orbital angular momentum as a consequence of their helical-phase front screw dislocation. This torsional beam structure interacts with rotating targets, changing the orbital angular momentum (azimuthal Doppler) of the scattered beam because angular momentum is a conserved quantity. I show how to measure this change independently from the usual longitudinal momentum (normal Doppler shift) and derive the apropos coherent mixing efficiencies for monostatic, truncated Laguerre and Gaussian-mode ladar antenna patterns. PMID:12206220

  9. Implementing torsional-mode Doppler ladar

    NASA Astrophysics Data System (ADS)

    Fluckiger, David U.

    2002-08-01

    Laguerre-Gaussian laser modes carry orbital angular momentum as a consequence of their helical-phase front screw dislocation. This torsional beam structure interacts with rotating targets, changing the orbital angular momentum (azimuthal Doppler) of the scattered beam because angular momentum is a conserved quantity. I show how to measure this change independently from the usual longitudinal momentum (normal Doppler shift) and derive the apropos coherent mixing efficiencies for monostatic, truncated Laguerre and Gaussian-mode ladar antenna patterns.

  10. AMCOM RDEC ladar HWIL simulation system development

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Mobley, Scottie B.; Buford, James A., Jr.

    2003-09-01

    Hardware-in-the-loop (HWIL) testing has, for many years, been an integral part of the modeling and simulation efforts at the U.S. Army Aviation and Missile Command's (AMCOM) Aviation and Missile Research, Engineering, and Development Center (AMRDEC). AMCOM's history includes the development, characterization, and implementation of several unique technologies for the creation of synthetic environments in the visible, infrared, and radio frequency spectral regions, and AMCOM has continued significant efforts in these areas. This paper describes recent advancements at AMCOM's Advanced Simulation Center (ASC) and concentrates on Ladar HWIL simulation system development.

  11. Foliage discrimination using a rotating ladar

    NASA Technical Reports Server (NTRS)

    Castano, A.; Matthies, L.

    2003-01-01

    We present a real-time algorithm that detects foliage using range data from a rotating laser. Objects not classified as foliage are conservatively labeled as non-drivable obstacles. In contrast to related work that uses range statistics to classify objects, we exploit the expected localities and continuities of an obstacle, in both space and time. Also, instead of attempting to find a single accurate discriminating factor for every ladar return, we hypothesize the class of a few returns and then spread the confidence (and classification) to other returns using the locality constraints. The Urbie robot is presently using this algorithm to discriminate drivable grass from obstacles during outdoor autonomous navigation tasks.

  12. New High-Resolution 3D Imagery of Fault Deformation and Segmentation of the San Onofre and San Mateo Trends in the Inner California Borderlands

    NASA Astrophysics Data System (ADS)

    Holmes, J. J.; Driscoll, N. W.; Kent, G. M.; Bormann, J. M.; Harding, A. J.

    2015-12-01

    The Inner California Borderlands (ICB) is situated off the coast of southern California and northern Baja. The structural and geomorphic characteristics of the area record a middle Oligocene transition from subduction to microplate capture along the California coast. Marine stratigraphic evidence shows large-scale extension and rotation overprinted by modern strike-slip deformation. Geodetic and geologic observations indicate that approximately 6-8 mm/yr of Pacific-North American relative plate motion is accommodated by offshore strike-slip faulting in the ICB. The farthest inshore fault system, the Newport-Inglewood Rose Canyon (NIRC) fault complex is a dextral strike-slip system that extends primarily offshore approximately 120 km from San Diego to the San Joaquin Hills near Newport Beach, California. Based on trenching and well data, the NIRC fault system Holocene slip rate is 1.5-2.0 mm/yr to the south and 0.5-1.0 mm/yr along its northern extent. An earthquake rupturing the entire length of the system could produce an Mw 7.0 earthquake or larger. West of the main segments of the NIRC fault complex are the San Mateo and San Onofre fault trends along the continental slope. Previous work concluded that these were part of a strike-slip system that eventually merged with the NIRC complex. Others have interpreted these trends as deformation associated with the Oceanside Blind Thrust fault purported to underlie most of the region. In late 2013, we acquired the first high-resolution 3D P-Cable seismic surveys (3.125 m bin resolution) of the San Mateo and San Onofre trends as part of the Southern California Regional Fault Mapping project aboard the R/V New Horizon. Analysis of these volumes provides important new insights and constraints on the fault segmentation and transfer of deformation. Based on the new 3D sparker seismic data, our preferred interpretation for the San Mateo and San Onofre fault trends is they are transpressional features associated with westward

  13. 1541nm GmAPD LADAR system

    NASA Astrophysics Data System (ADS)

    Kutteruf, Mary R.; Lebow, Paul

    2014-06-01

    The single-photon sensitivity of Geiger-mode avalanche photodiodes (GmAPDs) has facilitated the development of LADAR systems that operate at longer stand-off distances, require lower laser pulse powers, and are capable of imaging through a partial obscuration. In this paper, we describe a GmAPD LADAR system which operates at the eye-safe wavelength of 1541 nm. The longer wavelength should enhance system covertness and improve haze penetration compared to systems using 1064 nm lasers. The system is comprised of a COTS 1541 nm erbium fiber laser producing 4 ns pulses at 80 kHz to 450 kHz and a COTS camera with a focal plane of 32x32 InGaAs GmAPDs band-gap optimized for 1550 nm. Laboratory characterization methodology and results are discussed. We show that accurate modeling of the system response allows us to achieve a depth resolution which is limited by the width of the camera's time bin (0.25 ns, or 1.5 inches) rather than by the duration of the laser pulse (4 ns, or 2 ft). In the presence of obscuration, the depth discrimination is degraded to 6 inches but is still significantly better than that dictated by the laser pulse duration. We conclude with a discussion of future work.

  14. New High-Resolution 3D Seismic Imagery of Deformation and Fault Architecture Along Newport-Inglewood/Rose Canyon Fault in the Inner California Borderlands

    NASA Astrophysics Data System (ADS)

    Holmes, J. J.; Bormann, J. M.; Driscoll, N. W.; Kent, G.; Harding, A. J.; Wesnousky, S. G.

    2014-12-01

    The tectonic deformation and geomorphology of the Inner California Borderlands (ICB) records the transition from a convergent plate margin to a predominantly dextral strike-slip system. Geodetic measurements of plate boundary deformation onshore indicate that approximately 15%, or 6-8 mm/yr, of the total Pacific-North American relative plate motion is accommodated by faults offshore. The largest near-shore fault system, the Newport-Inglewood/Rose Canyon (NI/RC) fault complex, has a Holocene slip rate estimate of 1.5-2.0 mm/yr, according to onshore trenching, and current models suggest the potential to produce an Mw 7.0+ earthquake. The fault zone extends approximately 120 km, initiating from the south near downtown San Diego and striking northwards with a constraining bend north of Mt. Soledad in La Jolla and continuing northwestward along the continental shelf, eventually stepping onshore at Newport Beach, California. In late 2013, we completed the first high-resolution 3D seismic survey (3.125 m bins) of the NI/RC fault offshore of San Onofre as part of the Southern California Regional Fault Mapping project. We present new constraints on fault geometry and segmentation of the fault system that may play a role in limiting the extent of future earthquake ruptures. In addition, slip rate estimates using piercing points such as offset channels will be explored. These new observations will allow us to investigate recent deformation and strain transfer along the NI/RC fault system.

  15. 3D rapid mapping

    NASA Astrophysics Data System (ADS)

    Isaksson, Folke; Borg, Johan; Haglund, Leif

    2008-04-01

    In this paper the performance of passive range measurement imaging using stereo techniques in real-time applications is described. Stereo vision uses multiple images to obtain depth resolution in a similar way as Synthetic Aperture Radar (SAR) uses multiple measurements to obtain better spatial resolution. This technique has been used in photogrammetry for a long time, but it will be shown that it is now possible to do the calculations, with carefully designed image processing algorithms, in real time on e.g. a PC. In order to get high resolution and quantitative data in the stereo estimation, a mathematical camera model is used. The parameters of the camera model are determined in a calibration rig, or, in the case of a moving camera, the scene itself can be used for calibration of most of the parameters. After calibration an ordinary TV camera has an angular resolution like a theodolite, but at a much lower price. The paper presents results from high-resolution 3D imagery from air to ground. The 3D results from stereo calculation of image pairs are stitched together into a large database to form a 3D model of the area covered.
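
    The underlying geometry can be summarized in a few lines: for a rectified stereo pair, depth is focal length times baseline divided by disparity, and the sensitivity to a one-pixel disparity error grows with the square of depth. The focal length, baseline and disparity below are illustrative assumptions, not the flight configuration described in the paper.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for an idealized rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

# 8000-pixel focal length, 100 m baseline between exposures, 800-pixel disparity:
z = depth_from_disparity(8000.0, 100.0, 800.0)
print(z)                                   # 1000.0 m to the ground point
print(z ** 2 / (8000.0 * 100.0))           # ~1.25 m depth change per pixel of disparity error
```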

  16. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways for the lack of internationally recognized standard requirements for metrological parameters able to identify the capability of capturing a real scene. For this reason several national and international organizations in the last ten years have been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, Phase Shift, FM-CW, flash LADAR), this paper shows the state of the art about the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be directly analyzed or some derived parameters can be obtained (e.g. angles between planes, distances between barycenters of spheres rigidly connected, frequency domain parameters, etc.). This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.

  17. AMRDEC's HWIL synthetic environment development efforts for LADAR sensors

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.

    2004-08-01

    Hardware-in-the-loop (HWIL) testing has been an integral part of the modeling and simulation efforts at the U.S. Army Aviation and Missile Research, Engineering, and Development Center (AMRDEC). AMRDEC's history includes the development and implementation of several unique technologies for producing synthetic environments in the visible, infrared, MMW and RF regions. With emerging sensor and electronics technology, LADAR sensors are becoming a more viable option as an integral part of weapon systems, and AMRDEC has been expending effort to develop the capabilities for testing LADAR sensors in a HWIL environment. There are several areas of challenge in LADAR HWIL testing, since the simulation requirements for the electronics and computation are a stressing combination of passive-image and active-sensor HWIL testing. There have been several key areas where advancements have been made to address the challenges of developing a synthetic environment for LADAR sensor testing. In this paper, we present the latest results from the LADAR projector development and test efforts at AMRDEC's Advanced Simulation Center (ASC).

  18. Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system

    NASA Astrophysics Data System (ADS)

    Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong

    2011-06-01

    Ladar system simulation models a ladar system in software in order to predict its performance. This paper reviews developments in laser imaging radar simulation in domestic and international studies, as well as computer simulation studies of ladar systems for different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research in imaging ladar system simulation has so far been limited in scale, non-unified in design, and mostly aimed at simple functional simulation based on ladar ranging equations. A laser imaging radar simulation with an open and modularized structure is proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings, and system controller. A unified Matlab toolbox and standard control modules have been built with regulated inputs and outputs of the functions and communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system observing a space shuttle is made with the toolbox. The simulation results show that the models and parameter settings of the Matlab toolbox are able to simulate the actual detection process precisely. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection. Its open structure enables the toolbox to be modified for specialized requests, and the modularization gives the simulations flexibility.

  19. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and grouped together. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real 8-gray-scale range image of coherent ladar are denoised by this algorithm, and the results are compared with those of the median filter, multi-template order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
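
    The sketch below is a much simplified rendering of the NLPS idea for an 8-gray-scale range image: for each pixel, the most similar blocks in a search window are grouped, and the pixel is replaced by the most frequent gray value in the group (the mode of the marginal PDF) rather than a weighted mean. Window sizes, group size and the brute-force search are assumptions, not the authors' settings.

```python
import numpy as np

def nlps_denoise(img, patch=3, search=7, n_similar=16, levels=8):
    """Replace each pixel by the modal gray value of the centers of its most similar blocks."""
    pad, half = search // 2, patch // 2
    padded = np.pad(img, pad + half, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + half, j + pad + half
            ref = padded[ci - half:ci + half + 1, cj - half:cj + half + 1]
            scores, centers = [], []
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    bi, bj = ci + di, cj + dj
                    blk = padded[bi - half:bi + half + 1, bj - half:bj + half + 1]
                    scores.append(np.sum((blk - ref) ** 2))   # block-matching score
                    centers.append(padded[bi, bj])
            order = np.argsort(scores)[:n_similar]            # the similar-block group
            group = np.asarray(centers)[order].astype(int)
            out[i, j] = np.bincount(group, minlength=levels).argmax()
    return out

rng = np.random.default_rng(0)
clean = np.full((16, 16), 4)                        # flat 8-level range patch
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.15               # range-anomaly (impulse) noise
noisy[mask] = rng.integers(0, 8, mask.sum())
denoised = nlps_denoise(noisy)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())   # error drops sharply
```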

  20. ALLFlight: detection of moving objects in IR and ladar images

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven

    2013-05-01

    Supporting a helicopter pilot during landing and takeoff in degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, Infrared, mmW radar and laser radar) are mounted onto DLR's research helicopter FHS (flying helicopter simulator) for gathering different sensor data of the surrounding world. A high performance computer cluster architecture acquires and fuses all the information to get one single comprehensive description of the outside situation. While both TV and IR cameras deliver images with frame rates of 25 Hz or 30 Hz, Ladar and mmW radar provide georeferenced sensor data with only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or Ladar sequences. Especially if the helicopter is flying with higher speed, it is very important to minimize the detection time of obstacles in order to initiate a re-planning of the helicopter's mission timely. Applying feature extraction algorithms on IR images in combination with data fusion algorithms of extracted features and Ladar data can decrease the detection time appreciably. Based on real data from flight tests, the paper describes applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and Ladar data.

  1. Anomaly detection in clutter using spectrally enhanced LADAR

    NASA Astrophysics Data System (ADS)

    Chhabra, Puneet S.; Wallace, Andrew M.; Hopgood, James R.

    2015-05-01

    Discrete return (DR) Laser Detection and Ranging (Ladar) systems provide a series of echoes that reflect from objects in a scene. These can be first, last or multi-echo returns. In contrast, Full-Waveform (FW) Ladar systems measure the intensity of light reflected from objects continuously over a period of time. In a camouflaged scenario, e.g., objects hidden behind dense foliage, a FW-Ladar penetrates the foliage and returns a sequence of echoes including buried faint echoes. The aim of this paper is to learn local patterns of co-occurring echoes characterised by their measured spectra. A deviation from such patterns defines an abnormal event in a forest/tree depth profile. As far as the authors know, neither DR nor FW-Ladar, combined with multiple spectral measurements, has been applied to anomaly detection. This work presents an algorithm that allows detection of spectral and temporal anomalies in FW Multi-Spectral Ladar (FW-MSL) data samples. An anomaly is defined as a full-waveform temporal and spectral signature that does not conform to a prior expectation, represented using a learnt subspace (dictionary) and a set of coefficients that capture co-occurring local patterns using an overlapping temporal window. A modified optimization scheme is proposed for subspace learning based on stochastic approximations. The objective function is augmented with a discriminative term that represents the subspace's separability properties and supports anomaly characterisation. The algorithm detects several man-made objects and anomalous spectra hidden in dense clutter of vegetation and also allows tree species classification.
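
    A toy version of the anomaly criterion can be sketched as follows: learn a low-dimensional background subspace from clutter waveforms and score each test window by its reconstruction residual, flagging windows the subspace explains poorly. Here a PCA basis stands in for the paper's learnt dictionary and discriminative term, and the synthetic waveforms are purely illustrative.

```python
import numpy as np

def fit_subspace(background, k=8):
    """Learn a k-dimensional background subspace from training windows (rows)."""
    mean = background.mean(axis=0)
    _, _, vt = np.linalg.svd(background - mean, full_matrices=False)
    return mean, vt[:k]

def anomaly_scores(windows, mean, basis):
    centered = windows - mean
    recon = centered @ basis.T @ basis
    return np.linalg.norm(centered - recon, axis=1)   # residual energy per window

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
# background clutter: bell-shaped returns with varying delay plus a little noise
clutter = np.array([np.exp(-0.5 * ((t - rng.uniform(0.3, 0.7)) / 0.05) ** 2)
                    for _ in range(200)])
clutter += rng.normal(0, 0.01, clutter.shape)
mean, basis = fit_subspace(clutter)

test = clutter[:3].copy()
test[2] += 0.5 * np.sin(40 * t)                 # a temporally/spectrally odd return
print(anomaly_scores(test, mean, basis))        # last score clearly larger
```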

  2. Limitations of Geiger-mode arrays for Flash LADAR applications

    NASA Astrophysics Data System (ADS)

    Williams, George M., Jr.

    2010-04-01

    It is shown through physics-based Monte Carlo simulations of avalanche photodiode (APD) LADAR receivers that under typical operating scenarios, Geiger-mode APD (GmAPD) flash LADAR receivers may often be ineffective. These results are corroborated by analysis of the signal photon detection efficiency and signal-to-noise ratio metrics. Due to their ability to detect only one pulse per laser shot, the target detection efficiency of GmAPD receivers, as measured by target signal events detected compared to those present at the receiver's optical aperture, is shown to be highly particular and to respond nonlinearly to the specific LADAR conditions including range, laser power, detector efficiency, and target occlusion, which causes the GmAPD target detection capabilities to vary unpredictably over standard mission conditions. In the detection of partially occluded targets, GmAPD LADAR receivers perform optimally within only a narrow operating window of range, detector efficiency, and laser power; outside this window performance degrades sharply. Operating at both short and long standoff ranges, GmAPD receivers most often cannot detect partially occluded targets, and with an increased number of detector dark noise events, e.g. resulting from exposure to ionizing radiation, the probability that a GmAPD device is armed and able to detect target signal returns approaches zero. Even when multiple pulses are accumulated or contrived operational scenarios are employed, and even in weak-signal scenarios, GmAPDs most often perform inefficiently in their detection of target signal events at the aperture. It is concluded that the inability of the GmAPD to detect target signal present at the receiver's aperture may lead to a loss of operational capability, may have undesired implications for the equivalent optical aperture, laser power, and/or system complexity, and may incur other costs deleterious to operational efficacy.
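
    The arming argument can be captured with a toy Poisson model (an assumption for illustration, not the paper's Monte Carlo simulation): a Geiger-mode pixel triggers on the first detection in a shot, so the probability that it is still armed when the target return arrives decays exponentially with the expected number of earlier noise or foreground detections.

```python
import math

def p_signal_detected(n_early, n_signal):
    """n_early: mean noise/foreground detections expected before the target return;
    n_signal: mean detected target photoelectrons per shot."""
    p_armed = math.exp(-n_early)                  # no earlier Poisson event fired the pixel
    return p_armed * (1.0 - math.exp(-n_signal))

for n_early in (0.05, 0.5, 2.0, 5.0):
    print(f"{n_early:4.2f} earlier events -> P(detect) = {p_signal_detected(n_early, 1.0):.4f}")
# detection efficiency collapses as dark counts or occluding foreground returns increase
```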

  3. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  4. Ladar System Identifies Obstacles Partly Hidden by Grass

    NASA Technical Reports Server (NTRS)

    Castano, Andres

    2003-01-01

    A ladar-based system now undergoing development is intended to enable an autonomous mobile robot in an outdoor environment to avoid moving toward trees, large rocks, and other obstacles that are partly hidden by tall grass. The design of the system incorporates the assumption that the robot is capable of moving through grass and provides for discrimination between grass and obstacles on the basis of geometric properties extracted from ladar readings as described below. The system (see figure) includes a ladar system that projects a range-measuring pulsed laser beam with a small angular width of δ radians and is capable of measuring distances of reflective objects from a minimum of dmin to a maximum of dmax. The system is equipped with a rotating mirror that scans the beam through a relatively wide angular range Θ in a horizontal plane at a suitably small height above the ground. Successive scans are performed at time intervals of T seconds. During each scan, the laser beam is fired at relatively small angular intervals of ε radians to make range measurements, so that the total number of range measurements acquired in a scan is Ne = Θ/ε.

  5. Target recognition for ladar range image using slice image

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Wang, Liang

    2015-12-01

    A shape descriptor and a complete shape-based recognition system using slice images as geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by three-dimensional Hough transform and the corresponding mathematical transformation. The system consists of two processes, the model library construction and recognition. In the model library construction process, a series of range images are obtained after the model object is sampled at preset attitude angles. Then, all the range images are converted into slice images. The number of slice images is reduced by clustering analysis and finding a representation to reduce the size of the model library. In the recognition process, the slice image of the scene is compared with the slice image in the model library. The recognition results depend on the comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and comparison between the slice image representation method and moment invariants representation method is performed. The experimental results show that whether in conditions without noise or with ladar noise, the system has a high recognition rate and low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.

  6. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  7. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  8. A 3-D Look at Post-Tropical Cyclone Hermine

    NASA Video Gallery

    This 3-D flyby animation of GPM imagery shows Post-Tropical Storm Hermine on Sept. 6. Rain was falling at a rate of over 1.1 inches (27 mm) per hour between the Atlantic coast and Hermine's center ...

  9. Ladar scene projector for a hardware-in-the-loop simulation system.

    PubMed

    Xu, Rui; Wang, Xin; Tian, Yi; Li, Zhuo

    2016-07-20

    In order to test a direct-detection ladar in a hardware-in-the-loop simulation system, a ladar scene projector is proposed. A model based on the ladar range equation is developed to calculate the profile of the ladar return signal. The influences of both the atmosphere and the target's surface properties are considered. The insertion delays of different channels of the ladar scene projector are investigated and compensated for. A target range image with 108 pixels is generated. The simulation range is from 0 to 15 km, the range resolution is 1.04 m, the range error is 1.28 cm, and the peak-valley error for different channels is 15 cm. PMID:27463932
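
    A generic range-equation model of the kind described can be sketched as below for a Lambertian target smaller than the beam footprint, with two-way atmospheric loss; all symbols and values are assumptions for illustration rather than the projector's calibrated model.

```python
import math

def return_power(p_tx_w, range_m, reflectivity, target_area_m2, rx_aperture_m2,
                 beam_div_rad=2.0e-3, atm_loss_db_per_km=0.4, optics_eff=0.8):
    """Peak received power for a Lambertian target smaller than the beam footprint,
    including two-way atmospheric attenuation."""
    t_atm_two_way = 10.0 ** (-2.0 * atm_loss_db_per_km * (range_m / 1000.0) / 10.0)
    footprint_m2 = math.pi * (beam_div_rad * range_m / 2.0) ** 2
    irradiance = p_tx_w / footprint_m2                                 # W/m^2 at the target
    intensity = irradiance * target_area_m2 * reflectivity / math.pi   # W/sr, Lambertian
    return intensity * rx_aperture_m2 / range_m ** 2 * t_atm_two_way * optics_eff

for r in (1000.0, 5000.0, 15000.0):    # the simulated range spans 0-15 km
    print(f"{r / 1000.0:5.1f} km -> {return_power(1.0e3, r, 0.3, 1.0, 0.01):.3e} W")
```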

  10. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an issue of education of primary importance. With 25 years of professional experience in France, the United States and Germany, Odile Meulien has set up a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policies, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to that of the 1990s, the holographic concept is spreading in all scientific, social, and artistic activities of our present time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How to handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How to learn to create 3D visualization, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  11. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  12. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  13. Range resolution improvement of eyesafe ladar testbed (ELT) measurements using sparse signal deconvolution

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Gunther, Jacob H.

    2014-06-01

    The Eyesafe Ladar Test-bed (ELT) is an experimental ladar system with the capability of digitizing return laser pulse waveforms at 2 GHz. These waveforms can then be exploited off-line in the laboratory to develop signal processing techniques for noise reduction, range resolution improvement, and range discrimination between two surfaces of similar range interrogated by a single laser pulse. This paper presents the results of experiments with new deconvolution algorithms with the hoped-for gains of improving the range discrimination of the ladar system. The sparsity of ladar returns is exploited to solve the deconvolution problem in two steps. The first step is to estimate a point target response using a database of measured calibration data. This basic target response is used to construct a dictionary of target responses with different delays/ranges. Using this dictionary ladar returns from a wide variety of surface configurations can be synthesized by taking linear combinations. A sparse linear combination matches the physical reality that ladar returns consist of the overlapping of only a few pulses. The dictionary construction process is a pre-processing step that is performed only once. The deconvolution step is performed by minimizing the error between the measured ladar return and the dictionary model while constraining the coefficient vector to be sparse. Other constraints such as the non-negativity of the coefficients are also applied. The results of the proposed technique are presented in the paper and are shown to compare favorably with previously investigated deconvolution techniques.
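
    The two-step idea above can be illustrated with a small sketch: build a dictionary whose columns are delayed copies of a calibration pulse, then fit the measured return with a sparse, non-negative combination (here via greedy matching pursuit rather than the constrained optimization used in the paper). The 2 GHz sample rate matches the ELT digitizer; the pulse shape, noise level and surface spacing are assumptions.

```python
import numpy as np

def shifted_pulse_dictionary(pulse, n_samples):
    D = np.zeros((n_samples, n_samples))
    for k in range(n_samples):
        m = min(len(pulse), n_samples - k)
        D[k:k + m, k] = pulse[:m]                # column k = calibration pulse delayed k samples
    return D

def sparse_fit(y, D, n_terms=4):
    """Greedy non-negative matching pursuit: repeatedly pick the delay whose atom
    best explains the residual and accumulate its coefficient."""
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    norms = np.sum(D ** 2, axis=0)
    for _ in range(n_terms):
        corr = D.T @ residual
        k = int(np.argmax(corr))
        if corr[k] <= 0:
            break
        step = corr[k] / norms[k]
        coeffs[k] += step
        residual -= step * D[:, k]
    return coeffs

fs_ghz, n = 2.0, 256                             # 2 GHz digitizer, 128 ns record
t = np.arange(40) / fs_ghz                       # calibration pulse, ~4 ns FWHM
pulse = np.exp(-0.5 * ((t - 10.0) / 1.7) ** 2)
D = shifted_pulse_dictionary(pulse, n)
truth = np.zeros(n)
truth[100], truth[120] = 1.0, 0.6                # two surfaces 10 ns (~1.5 m) apart
y = D @ truth + np.random.default_rng(1).normal(0.0, 0.01, n)
print(np.nonzero(sparse_fit(y, D) > 0.1)[0])     # recovers delays 100 and 120
```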

  14. Use of 3D laser radar for navigation of unmanned aerial and ground vehicles in urban and indoor environments

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Venable, Don; Smearcheck, Mark

    2007-04-01

    This paper discusses the integration of inertial measurements with measurements from a three-dimensional (3D) imaging sensor for position and attitude determination of unmanned aerial vehicles (UAV) and autonomous ground vehicles (AGV) in urban or indoor environments. To enable operation of UAVs and AGVs at any time in any environment, a Precision Navigation, Attitude, and Time (PNAT) capability is required that is robust and not solely dependent on the Global Positioning System (GPS). In urban and indoor environments a GPS position capability may be unavailable not only due to shadowing, significant signal attenuation or multipath, but also due to intentional denial or deception. Although deep integration of GPS and Inertial Measurement Unit (IMU) data may prove to be a viable solution, an alternative method is discussed in this paper. The alternative solution is based on 3D imaging sensor technologies such as Flash Ladar (Laser Radar). Flash Ladar technology consists of a modulated laser emitter coupled with a focal plane array detector and the required optics. Like a conventional camera this sensor creates an "image" of the environment, but instead of producing a 2D image in which each pixel has an associated intensity value, the flash Ladar generates an image in which each pixel has associated range and intensity values. Integration of flash Ladar with the attitude from the IMU allows creation of a 3D scene. Current low-cost Flash Ladar technology is capable of greater than 100 x 100 pixel resolution with 5 mm depth resolution at a 30 Hz frame rate. The proposed algorithm first converts the 3D imaging sensor measurements to a 3D point cloud; next, significant environmental features such as planar features (walls), line features or point features (corners) are extracted and associated from one 3D imaging sensor frame to the next. Finally, characteristics of these features such as the normal or direction vectors are used to compute the platform position and attitude
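
    The plane-feature extraction step described above can be sketched with a basic RANSAC loop over a flash-Ladar point cloud; the dominant plane's normal is the kind of feature characteristic the pose computation would then use. Point counts, noise level and the inlier tolerance are assumptions, not the paper's parameters.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Return (unit normal, d) with n.x + d = 0 for the dominant plane, plus an inlier mask."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best = None, None
    for _ in range(n_iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                     # degenerate (nearly collinear) sample
            continue
        n = n / norm
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers

rng = np.random.default_rng(2)
wall = np.column_stack([np.full(400, 5.0), rng.uniform(-2, 2, 400), rng.uniform(0, 3, 400)])
wall += rng.normal(0, 0.01, wall.shape)            # a wall at x = 5 m with sensor noise
clutter = rng.uniform(0, 6, (100, 3))
n, d, mask = ransac_plane(np.vstack([wall, clutter]))
print(np.round(n, 2), round(d, 2), mask.sum())     # ~[+/-1, 0, 0], d ~ -/+5, ~400+ inliers
```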

  15. Advances in linear and area HgCdTe APD arrays for eyesafe LADAR sensors

    NASA Astrophysics Data System (ADS)

    Jack, Michael D.; Asbrock, James F.; Anderson, C.; Bailey, Steven L.; Chapman, George; Gordon, E.; Herning, P. E.; Kalisher, Murray H.; Kosai, Kim; Liquori, V.; Randall, Valerie; Rosbeck, Joseph P.; Sen, Sanghamitra; Wetzel, P.; Halmos, Maurice J.; Trotta, Patrick A.; Hunter, Andrew T.; Jensen, John E.; de Lyon, Terence J.; Johnson, W.; Walker, B.; Trussel, Ward; Hutchinson, Andy; Balcerak, Raymond S.

    2001-11-01

    HgCdTe APDs and APD arrays offer unique advantages for high-performance eyesafe LADAR sensors. These include operation at room temperature, low excess noise, high gain, high quantum efficiency at eyesafe wavelengths, GHz bandwidth, and high packing density. The utility of these benefits for systems is being demonstrated for both linear and area array sensors. Raytheon has fabricated 32-element linear APD arrays utilizing liquid phase epitaxy (LPE), and has packaged and integrated these arrays with low-noise amplifiers. The better APDs, configured as 50-micron square pixels and fabricated utilizing RIE, have typically demonstrated high fill factors, low crosstalk, excellent uniformity, low dark currents, and noise equivalent power (NEP) of 1-2 nW. Two units have been delivered to NVESD, assembled with range extraction electronics, and integrated into the CELRAP laser radar system. Tests on these sensors in July and October 2000 have demonstrated excellent functionality, detection of 1-cm wires, and range imaging. Work is presently underway under DARPA's 3-D Imaging Sensor Program to extend this excellent performance to area arrays. High-density arrays have been fabricated using LPE and molecular beam epitaxy (MBE). HgCdTe APD arrays have been made in 5 X 5, 10 X 10 and larger formats. Initial data show excellent performance for the better APDs, with unmultiplied dark current < 10 nA and NEP < 2.0 nW at a gain of 10.

  16. Photon counting ladar work at FOI, Sweden

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Sjöqvist, Lars; Henriksson, Markus

    2012-06-01

    Photon counting techniques using direct detection have recently gained considerable interest within the laser radar community. The high sensitivity is of special importance for achieving high area coverage in surveillance and mapping applications, and long range with compact systems for imaging, profiling and ranging. New short-pulse lasers, including the supercontinuum laser, are of interest for active spectral imaging. A special technique in photon counting is "time-correlated single photon counting" (TCSPC). This can be utilized together with short-pulse (ps) lasers to achieve very high range resolution and accuracy (mm level). Low average power lasers in the mW range enable covert operation with respect to present laser warning technology. By analyzing the return waveform, range and shape information from the target can be extracted. By scanning the beam, high-resolution 3D images are obtained. At FOI we have studied TCSPC with respect to range profiling and imaging. Limitations due to low SNR and dwell times are studied in conjunction with varying daylight background and atmospheric turbulence. Examples of measurements will be presented and discussed with respect to some system applications.
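
    A toy illustration of the TCSPC idea described above: histogram single-photon arrival times over many laser shots and convert the peak bin to range. The bin width, timing jitter, and background rate are assumed values, not FOI's system parameters.

        import numpy as np

        C = 299_792_458.0                      # speed of light, m/s

        def tcspc_range(arrival_times_s, bin_width_s=4e-12):
            """Histogram single-photon arrival times and convert the peak bin
            (round-trip time) to a one-way range estimate."""
            edges = np.arange(0.0, arrival_times_s.max() + bin_width_s, bin_width_s)
            counts, _ = np.histogram(arrival_times_s, bins=edges)
            t_peak = edges[np.argmax(counts)] + bin_width_s / 2
            return C * t_peak / 2.0

        # Toy data: a target at 15 m with ~20 ps timing spread, buried in
        # uniformly distributed solar-background counts over a 200 ns gate.
        rng = np.random.default_rng(1)
        signal = rng.normal(loc=2 * 15.0 / C, scale=20e-12, size=300)
        background = rng.uniform(0.0, 200e-9, size=2000)
        print("estimated range (m):", tcspc_range(np.concatenate([signal, background])))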

  17. Signal processing for imaging and mapping ladar

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Tolt, Gustav

    2011-11-01

    The new generation of laser-based FLASH 3D imaging sensors enables data collection at video rate. This opens up possibilities for real-time data analysis but also places demands on the signal processing. In this paper the possibilities and challenges with this new data type are discussed. The commonly used focal plane array based detectors produce range estimates that vary with the target's surface reflectance and target range, and our experience is that the built-in signal processing may not compensate fully for that. We propose a simple adjustment that can be used even if some sensor parameters are not known. The cost of instantaneous image collection, compared to scanning laser radar systems, is lower range accuracy. By gathering range information from several frames, the geometrical information of the target can be obtained. We also present an approach for how range data can be used to remove foreground clutter in front of a target. Further, we illustrate how range data enables target classification in near real-time, and that the results can be improved if several frames are co-registered. Examples using data from forest and maritime scenes are shown.
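
    The abstract does not describe the clutter-removal approach in detail, so the following is only a generic illustration of how a range image can gate out foreground returns: estimate the dominant target range from the range histogram and mask everything closer than a guard distance. The guard distance and the synthetic scene are assumptions.

        import numpy as np

        def remove_foreground(range_img, intensity_img, guard_m=0.5):
            """Suppress returns that lie in front of the dominant (target) range.

            The target range is taken as the mode of the range histogram; every
            pixel closer than (target - guard_m) is treated as foreground clutter."""
            valid = np.isfinite(range_img)
            counts, edges = np.histogram(range_img[valid], bins=100)
            i = np.argmax(counts)
            target_range = 0.5 * (edges[i] + edges[i + 1])
            mask = range_img >= (target_range - guard_m)
            cleaned = np.where(mask, intensity_img, 0.0)
            return cleaned, target_range

        # Toy scene: a flat target at 60 m with sparse vegetation clutter at 40-55 m.
        rng = np.random.default_rng(2)
        range_img = np.full((64, 64), 60.0) + rng.normal(0, 0.1, (64, 64))
        clutter = rng.random((64, 64)) < 0.15
        range_img[clutter] = rng.uniform(40.0, 55.0, clutter.sum())
        intensity_img = rng.random((64, 64))
        cleaned, tgt = remove_foreground(range_img, intensity_img)
        print("estimated target range (m):", round(tgt, 2),
              "; clutter pixels suppressed:", int((cleaned == 0).sum()))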

  18. Comb-calibrated frequency-modulated continuous-wave ladar for absolute distance measurements.

    PubMed

    Baumann, Esther; Giorgetta, Fabrizio R; Coddington, Ian; Sinclair, Laura C; Knabe, Kevin; Swann, William C; Newbury, Nathan R

    2013-06-15

    We demonstrate a comb-calibrated frequency-modulated continuous-wave laser detection and ranging (FMCW ladar) system for absolute distance measurements. The FMCW ladar uses a compact external cavity laser that is swept quasi-sinusoidally over 1 THz at a 1 kHz rate. The system simultaneously records the heterodyne FMCW ladar signal and the instantaneous laser frequency at sweep rates up to 3400 THz/s, as measured against a free-running frequency comb (femtosecond fiber laser). Demodulation of the ladar signal against the instantaneous laser frequency yields the range to the target with 1 ms update rates, bandwidth-limited 130 μm resolution and a ~100 nm accuracy that is directly linked to the counted repetition rate of the comb. The precision is <100 nm at the 1 ms update rate and reaches ~6 nm for a 100 ms average. PMID:23938965
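
    For orientation only, the sketch below uses the textbook linear-sweep FMCW relations (beat frequency proportional to round-trip delay, and bandwidth-limited resolution c/2B) with the 1 THz excursion and 3400 THz/s sweep rate quoted in the abstract; c/2B gives 150 μm, of the same order as the reported 130 μm. The actual instrument sweeps quasi-sinusoidally and is demodulated against the comb-measured instantaneous frequency, which this sketch does not model.

        C = 299_792_458.0

        BANDWIDTH_HZ = 1e12            # 1 THz frequency excursion (from the abstract)
        SWEEP_RATE_HZ_PER_S = 3.4e15   # 3400 THz/s peak sweep rate (from the abstract)

        def range_from_beat(f_beat_hz, kappa=SWEEP_RATE_HZ_PER_S):
            """Linear-sweep approximation: f_beat = kappa * (2R/c), so R = c*f_beat/(2*kappa)."""
            return C * f_beat_hz / (2.0 * kappa)

        def range_resolution(bandwidth_hz=BANDWIDTH_HZ):
            """Bandwidth-limited range resolution, dR = c / (2*B)."""
            return C / (2.0 * bandwidth_hz)

        print("range for a 1 MHz beat note (m):", round(range_from_beat(1e6), 4))
        print("resolution for a 1 THz sweep (um):", range_resolution() * 1e6)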

  19. Surface identification from multiband LADAR reflectance with varied incidence angle via database mapping.

    PubMed

    Guiang, Chona; Jin, Xuemin; Levine, Robert Y

    2015-02-10

    Incident angle dependencies of LADAR reflection depend on bulk material reflectivity and surface texture properties that can be exploited for surface identification. In this paper, surface identification via multiband LADAR reflected radiance is assessed using the Nonconventional Exploitation Factors Data System (NEFDS) database. A statistics-based dimension reduction algorithm, t-distributed stochastic neighbor embedding (t-SNE), is used to separate the data clouds resulting from the monostatic LADAR reflected radiance and corresponding band ratios. The application of t-SNE to multiband reflected radiance effectively separates the data clouds, making surface identification via multiband LADAR reflectance possible in the presence of unknown incident angle dependencies and uncertainties. It is demonstrated that, for both the multiband monostatic reflected radiance and band ratios, the application of t-SNE mapping yields a significant improvement in surface identification from measurements with unknown or varied incident angles.
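
    A minimal sketch of the embedding step, using scikit-learn's TSNE on synthetic multiband reflectances instead of the NEFDS database (which is not reproduced here); the surface reflectance values, band count, and cosine angular falloff are assumptions, chosen only to show why band ratios suppress the unknown incidence-angle factor.

        import numpy as np
        from sklearn.manifold import TSNE

        # Synthetic stand-in for multiband monostatic reflected radiance: three
        # surface types, four bands, with an angle-dependent multiplicative falloff.
        rng = np.random.default_rng(3)
        surfaces = {"paint": [0.3, 0.5, 0.6, 0.4],
                    "concrete": [0.5, 0.55, 0.5, 0.45],
                    "foliage": [0.1, 0.2, 0.7, 0.6]}
        X, labels = [], []
        for name, rho in surfaces.items():
            angles = rng.uniform(0, 60, size=200)              # unknown incidence angles (deg)
            falloff = np.cos(np.radians(angles))[:, None]      # crude angular falloff model
            X.append(np.array(rho)[None, :] * falloff * (1 + 0.05 * rng.normal(size=(200, 4))))
            labels += [name] * 200
        X = np.vstack(X)

        # Band ratios cancel the common angular factor before embedding.
        ratios = X[:, 1:] / X[:, :1]
        emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(ratios)
        print(emb.shape, "embedded points for", len(labels), "labelled samples")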

  20. Ground vehicle based ladar for standoff detection of road-side hazards

    NASA Astrophysics Data System (ADS)

    Hollinger, Jim; Close, Ryan

    2015-05-01

    In recent years, the number of commercially available LADAR (also referred to as LIDAR) systems has grown with the increased interest in ground vehicle robotics and aided navigation/collision avoidance in various industries. With this increased demand, the cost of these systems has dropped and their capabilities have increased. As a result of this trend, LADAR systems are becoming a cost-effective sensor to use in a number of applications of interest to the US Army. One such application is the standoff detection of road-side hazards from ground vehicles. This paper will discuss detection of road-side hazards partially concealed by light to medium vegetation. Current algorithms using commercially available LADAR systems for detecting these targets will be presented, along with results from relevant data sets. Additionally, optimization of commercial LADAR sensors and/or fusion with radar will be discussed as ways of increasing detection ability.

  1. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  2. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics is presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  3. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  4. Exploring interaction with 3D volumetric displays

    NASA Astrophysics Data System (ADS)

    Grossman, Tovi; Wigdor, Daniel; Balakrishnan, Ravin

    2005-03-01

    Volumetric displays generate true volumetric 3D images by actually illuminating points in 3D space. As a result, viewing their contents is similar to viewing physical objects in the real world. These displays provide a 360 degree field of view, and do not require the user to wear hardware such as shutter glasses or head-trackers. These properties make them a promising alternative to traditional display systems for viewing imagery in 3D. Because these displays have only recently been made available commercially (e.g., www.actuality-systems.com), their current use tends to be limited to non-interactive output-only display devices. To take full advantage of the unique features of these displays, however, it would be desirable if the 3D data being displayed could be directly interacted with and manipulated. We investigate interaction techniques for volumetric display interfaces, through the development of an interactive 3D geometric model building application. While this application area itself presents many interesting challenges, our focus is on the interaction techniques that are likely generalizable to interactive applications for other domains. We explore a very direct style of interaction where the user interacts with the virtual data using direct finger manipulations on and around the enclosure surrounding the displayed 3D volumetric image.

  5. High-resolution 3D imaging laser radar flight test experiments

    NASA Astrophysics Data System (ADS)

    Marino, Richard M.; Davis, W. R.; Rich, G. C.; McLaughlin, J. L.; Lee, E. I.; Stanley, B. M.; Burnside, J. W.; Rowe, G. S.; Hatch, R. E.; Square, T. E.; Skelly, L. J.; O'Brien, M.; Vasile, A.; Heinrichs, R. M.

    2005-05-01

    Situation awareness and accurate Target Identification (TID) are critical requirements for successful battle management. Ground vehicles can be detected, tracked, and in some cases imaged using airborne or space-borne microwave radar. Obscurants such as camouflage netting and/or tree canopy foliage can degrade the performance of such radars. Foliage can be penetrated with long-wavelength microwave radar, but generally at the expense of imaging resolution. The goals of the DARPA Jigsaw program include the development and demonstration of high-resolution 3-D imaging laser radar (ladar) sensor technology and systems that can be used from airborne platforms to image and identify military ground vehicles that may be hiding under camouflage or foliage such as tree canopy. With DARPA support, MIT Lincoln Laboratory has developed a rugged and compact 3-D imaging ladar system that has successfully demonstrated the feasibility and utility of this application. The sensor system has been integrated into a UH-1 helicopter for winter and summer flight campaigns. The sensor operates day or night and produces high-resolution 3-D spatial images using short laser pulses and a focal plane array of Geiger-mode avalanche photo-diode (APD) detectors with independent digital time-of-flight counting circuits at each pixel. The sensor technology includes Lincoln Laboratory developments of the microchip laser and novel focal plane arrays. The microchip laser is a passively Q-switched solid-state frequency-doubled Nd:YAG laser transmitting short laser pulses (300 ps FWHM) at a 16 kilohertz pulse rate and at 532 nm wavelength. The single-photon detection efficiency has been measured to be > 20% using these 32x32 silicon Geiger-mode APDs at room temperature. The APD saturates while providing a gain of typically > 10^6. The pulse out of the detector is used to stop a 500 MHz digital clock register integrated within the focal-plane array at each pixel. Using the detector in this binary response mode
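
    Using only the numbers quoted in the abstract (a 500 MHz per-pixel time-of-flight clock), a back-of-the-envelope conversion from a stopped counter value to range looks like the sketch below; the real sensor's timing architecture and Geiger-mode statistics are of course more involved.

        C = 299_792_458.0
        F_CLK = 500e6        # per-pixel time-of-flight counter clock (from the abstract)

        def range_from_counts(counts, f_clk=F_CLK):
            """Each clock tick spans 1/f_clk of round-trip time, i.e. c/(2*f_clk) of range."""
            return C * counts / (2.0 * f_clk)

        print("range quantization per count (m):", C / (2 * F_CLK))   # ~0.3 m
        print("range at a stop count of 1000 (m):", range_from_counts(1000))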

  6. Venus in 3D

    NASA Astrophysics Data System (ADS)

    Plaut, J. J.

    1993-08-01

    Stereographic images of the surface of Venus, which enable geologists to reconstruct the details of the planet's evolution, are discussed. The 120-meter resolution of these 3D images makes it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  7. 3D reservoir visualization

    SciTech Connect

    Van, B.T.; Pajon, J.L.; Joseph, P. )

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  8. Three-dimensional (3D) stereoscopic X windows

    NASA Astrophysics Data System (ADS)

    Safier, Scott A.; Siegel, Mel

    1995-03-01

    All known technologies for displaying 3D-stereoscopic images are more or less incompatible with the X Window System. Applications that seek to be portable must support the 3D-display paradigms of multiple hardware implementations of 3D-stereoscopy. We have succeeded in modifying the functionality of X to construct generic tools for displaying 3D-stereoscopic imagery. Our approach allows for experimentation with visualization techniques and techniques for interacting with these synthetic worlds. Our methodology inherits the extensibility and portability of X. We have demonstrated its applicability in two display hardware paradigms that are specifically discussed.

  9. Pulse laser imaging amplifier for advanced ladar systems

    NASA Astrophysics Data System (ADS)

    Khizhnyak, Anatoliy; Markov, Vladimir; Tomov, Ivan; Murrell, David

    2016-05-01

    Security measures sometimes require persistent surveillance of government, military and public areas. Borders, bridges, sport arenas, airports and others are often surveilled with low-cost cameras. Their low-light performance can be enhanced with laser illuminators; however, various operational scenarios may require low-intensity laser illumination with the object-scattered light intensity lower than the sensitivity of the Ladar image detector. This paper discusses a novel type of high-gain optical image amplifier. The approach enables time-synchronization of the incoming and amplifying signals with an accuracy <= 1 ns. The technique allows the incoming signal to be amplified without the need to match the input spectrum to the cavity modes. Instead, the incoming signal is accepted within the spectral band of the amplifier. We have experimentally gauged the performance of the amplifier, demonstrating a 40 dB gain and an angle of view of 20 mrad.

  10. Concepts using optical MEMS array for ladar scene projection

    NASA Astrophysics Data System (ADS)

    Smith, J. Lynn

    2003-09-01

    Scene projection for HITL testing of LADAR seekers is unique because the 3rd dimension is time delay. Advancement in AFRL for electronic delay and pulse shaping circuits, VCSEL emitters, fiber optics and associated scene generation is underway, and technology hand-off to test facilities is expected eventually. However, the size and cost currently projected call for cost mitigation through further innovation in system design, incorporating new developments, cooperation, and leveraging of dual-purpose technology. Therefore a concept is offered which greatly reduces the number (and thus cost) of pulse shaping circuits and enables the projector to be installed on the mobile arm of a flight motion simulator table without fiber optic cables. The concept calls for an optical MEMS (micro-electromechanical system) steerable micro-mirror array. Each IFOV is a cluster of four micro-mirrors, each of which steers through a unique angle to a selected light source with the appropriate delay and waveform basis. An array of such sources promotes angle-to-delay mapping. Separate pulse waveform basis circuits for each scene IFOV are not required because a single set of basis functions is broadcast to all MEMS elements simultaneously. Waveform delivery to spatial filtering and collimation optics is addressed by angular selection at the MEMS array. Emphasis is on technology in existence or under development by the government, its contractors, and the telecommunications industry. Values for components are first assumed to be those that are easily available. Concept adequacy and upgrades are then discussed. In conclusion, an opto-mechanical scan option ranks as the best light source for near-term MEMS-based projector testing of both flash and scan LADAR seekers.

  11. Taming supersymmetric defects in 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-07-01

    We study knots in 3d Chern-Simons theory with complex gauge group SL(N,C), in the context of its relation with 3d N=2 theory (the so-called 3d-3d correspondence). The defect has either co-dimension 2 or co-dimension 4 inside the 6d (2,0) theory, which is compactified on a 3-manifold M̂. We identify such defects in various corners of the 3d-3d correspondence, namely in 3d SL(N,C) Chern-Simons theory, in 3d N=2 theory, in 5d N=2 super Yang-Mills theory, and in the M-theory holographic dual. We can make quantitative checks of the 3d-3d correspondence by computing partition functions at each of these theories. This Letter is a companion to a longer paper [1], which contains more details and more results.

  12. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  13. Multi Sensor Data Integration for AN Accurate 3d Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique for data integration between two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, the 3D model automatically generated from aerial imagery generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, the automatically generated 3D model from aerial imagery also suffers in many cases from undulated road surfaces, non-conforming building shapes, loss of minute details like street furniture, etc. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D road models, street furniture models, 3D models of details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, rooftops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each dataset's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated data set, or final 3D model, was generally noise free and without unnecessary details.

  14. Ladar scene generation techniques for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Coker, Jason S.; Coker, Charles F.; Bergin, Thomas P.

    1999-07-01

    LADAR (Laser Detection and Ranging), as its name implies, uses laser-ranging technology to provide information regarding target and/or background signatures. When fielded in systems, LADAR can provide ranging information to onboard algorithms that in turn may utilize the information to analyze target type and range. Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop (HWIL) facility can be used to provide a nondestructive testing environment to evaluate a system's capability and therefore reduce program risk and cost. However, in LADAR systems many factors can influence the quality of the data obtained, and thus have a significant impact on algorithm performance. It is therefore important to take these factors into consideration when attempting to simulate LADAR data for digital or HWIL testing. Some of the factors that will be considered in this paper include items such as weak or noisy detectors, multi-return, and weapon body dynamics. Various computer techniques that may be employed to simulate these factors will be analyzed to determine their merit for use in real-time simulations.
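
    The abstract lists weak or noisy detectors and multi-return among the factors to simulate; as a toy illustration (not the paper's real-time scene-generation techniques, and ignoring weapon body dynamics), the sketch below injects Gaussian range noise, dropouts, and occasional earlier second returns into a synthetic range image. All rates and magnitudes are assumed values.

        import numpy as np

        def simulate_ladar_frame(true_range, p_dropout=0.05, range_sigma=0.15,
                                 p_second_return=0.1, second_offset_m=3.0, rng=None):
            """Toy range-image corruption model: Gaussian range noise plus dropouts
            (weak/noisy detectors) and a fraction of pixels reporting a nearer
            surface such as foliage (multi-return, first return wins)."""
            if rng is None:
                rng = np.random.default_rng()
            meas = true_range + rng.normal(0.0, range_sigma, true_range.shape)
            second = rng.random(true_range.shape) < p_second_return
            meas[second] -= second_offset_m
            dropout = rng.random(true_range.shape) < p_dropout
            meas[dropout] = np.nan                     # no detection on weak returns
            return meas

        truth = np.full((32, 32), 120.0)               # flat target at 120 m
        frame = simulate_ladar_frame(truth, rng=np.random.default_rng(4))
        print("dropouts:", int(np.isnan(frame).sum()), "of", frame.size, "pixels")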

  15. MAP3D: a media processor approach for high-end 3D graphics

    NASA Astrophysics Data System (ADS)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with high performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications in the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.

  16. Counter-sniper 3D laser radar

    NASA Astrophysics Data System (ADS)

    Shepherd, Orr; LePage, Andrew J.; Wijntjes, Geert J.; Zehnpfennig, Theodore F.; Sackos, John T.; Nellums, Robert O.

    1999-01-01

    Visidyne, Inc., teaming with Sandia National Laboratories, has developed the preliminary design for an innovative scannerless 3-D laser radar capable of acquiring, tracking, and determining the coordinates of small-caliber projectiles in flight with sufficient precision that their origin can be established by back-projecting their tracks to their source. The design takes advantage of the relatively large effective cross-section of a bullet at optical wavelengths. Key to its implementation is the use of efficient, high-power laser diode arrays for illuminators and an imaging laser receiver using a unique CCD imager design that acquires the information to establish x, y (angle-angle) and range coordinates for each bullet at very high frame rates. The detection process achieves a high degree of discrimination by using the optical signature of the bullet, solar background mitigation, and track detection. Field measurements and computer simulations have been used to provide the basis for a preliminary design of a robust bullet tracker, the Counter Sniper 3-D Laser Radar. Experimental data showing 3-D test imagery acquired by a lidar with architecture similar to that of the proposed Counter Sniper 3-D Lidar are presented. A proposed Phase II development would yield an innovative, compact, and highly efficient bullet-tracking laser radar. Such a device would meet the needs of not only the military, but also federal, state, and local law enforcement organizations.

  17. Ground target detection based on discrete cosine transform and Rényi entropy for imaging ladar

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Chen, Weili; Li, Junwei; Dong, Yanbing

    2016-01-01

    Owing to its ability to represent images jointly in the spatial and spatial-frequency domains, the discrete cosine transform (DCT) has been applied in sequence data analysis and image fusion. For ladar intensity and range images, the statistical properties of the Rényi entropy are studied through a DCT applied over a one-dimensional window. We also analyze how the statistical properties of the Rényi entropy of the ladar intensity and range images change when man-made objects appear. On this foundation, a novel method for generating a saliency map based on the DCT and Rényi entropy is proposed. Ground target detection is then completed by segmenting the saliency map with a simple and convenient threshold method. For ladar intensity and range images, experimental results show the proposed method can effectively detect military vehicles against complex terrain backgrounds with a low false-alarm rate.
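
    The abstract does not give the exact construction, so the following is only one plausible reading: slide a one-dimensional window along each image row, take its DCT, and use the Rényi entropy of the normalized DCT magnitudes as the saliency cue (smooth man-made surfaces concentrate energy in few coefficients and therefore score low entropy). The window length, entropy order, toy image, and threshold below are all assumptions.

        import numpy as np
        from scipy.fft import dct

        def renyi_entropy(p, alpha=2.0):
            """Rényi entropy of order alpha for a discrete distribution p."""
            p = p / p.sum()
            return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

        def dct_entropy_map(img, win=16, alpha=2.0):
            """Slide a 1-D window along each row, take its DCT, and record the
            Rényi entropy of the normalized DCT magnitudes; smooth man-made
            regions concentrate energy in few coefficients (low entropy)."""
            rows, cols = img.shape
            out = np.zeros((rows, cols - win + 1))
            for r in range(rows):
                for c in range(cols - win + 1):
                    coeffs = np.abs(dct(img[r, c:c + win], norm="ortho")) + 1e-12
                    out[r, c] = renyi_entropy(coeffs, alpha)
            return out

        # Toy intensity image: textured background with a smooth bright patch ("vehicle").
        rng = np.random.default_rng(5)
        img = rng.random((64, 64))
        img[24:40, 24:48] = 0.9
        ent = dct_entropy_map(img)
        saliency = ent.max() - ent                       # low entropy -> high saliency
        mask = saliency > saliency.mean() + 2 * saliency.std()
        print("salient pixels flagged:", int(mask.sum()))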

  18. Imagery Integration Team

    NASA Technical Reports Server (NTRS)

    Calhoun, Tracy; Melendrez, Dave

    2014-01-01

    -of-a-kind imagery assets and skill sets, such as ground-based fixed and tracking cameras, crew-in-the-loop imaging applications, and the integration of custom or commercial-off-the-shelf sensors onboard spacecraft. For spaceflight applications, the Integration 2 Team leverages modeling, analytical, and scientific resources along with decades of experience and lessons learned to assist the customer in optimizing engineering imagery acquisition and management schemes for any phase of flight - launch, ascent, on-orbit, descent, and landing. The Integration 2 Team guides the customer in using NASA's world-class imagery analysis teams, which specialize in overcoming inherent challenges associated with spaceflight imagery sets. Precision motion tracking, two-dimensional (2D) and three-dimensional (3D) photogrammetry, image stabilization, 3D modeling of imagery data, lighting assessment, and vehicle fiducial marking assessments are available. During a mission or test, the Integration 2 Team provides oversight of imagery operations to verify fulfillment of imagery requirements. The team oversees the collection, screening, and analysis of imagery to build a set of imagery findings. It integrates and corroborates the imagery findings with other mission data sets, generating executive summaries to support time-critical mission decisions.

  19. Imaging signal-to-noise ratio of synthetic aperture ladar

    NASA Astrophysics Data System (ADS)

    Liu, Liren

    2015-09-01

    On the basis of the Poisson photocurrent statistics in photon-limited heterodyne detection, in this paper the signal-to-noise ratios in the receiver in the time domain, and on the focused 1-D and 2-D images in the space domain, are derived for both down-looking and side-looking synthetic aperture imaging ladars using PIN or APD photodiodes. The major shot noises in the down-looking SAIL and the side-looking SAIL are, respectively, from the dark current of the photodiode and from the local beam current. It is found that the ratio of the 1-D image SNR to the receiver SNR is proportional to the number of resolution elements in the direction across the direction of travel, and the ratio of the 2-D image SNR to the 1-D image SNR is proportional to the number of resolution elements in the direction of travel. The sensitivity, the effect of the Fourier transform of the sampled signal, and the influence of the time response of the detection circuit are also discussed. The study will help in correctly designing a SAIL system.
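
    Restating the two scaling relations from the abstract in symbols (the notation N_perp and N_par, for the number of resolution elements across and along the direction of travel, is ours, not the paper's):

        \frac{\mathrm{SNR}_{1D}}{\mathrm{SNR}_{\mathrm{rx}}} \propto N_{\perp},
        \qquad
        \frac{\mathrm{SNR}_{2D}}{\mathrm{SNR}_{1D}} \propto N_{\parallel},
        \qquad\Longrightarrow\qquad
        \mathrm{SNR}_{2D} \propto N_{\perp}\, N_{\parallel}\, \mathrm{SNR}_{\mathrm{rx}}.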

  20. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  1. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  2. Martian terrain - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This area of terrain near the Sagan Memorial Station was taken on Sol 3 by the Imager for Mars Pathfinder (IMP). 3D glasses are necessary to identify surface detail.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  3. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Cañada Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky); and U.S. Geological Survey digital aerial photography provided the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  4. A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission

    NASA Technical Reports Server (NTRS)

    Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.

    2011-01-01

    This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.

  5. Noise filtering techniques for photon-counting ladar data

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Wharton, Michael E., III; Stout, Kevin D.; Neuenschwander, Amy L.

    2012-06-01

    Many of the recent small, low-power ladar systems provide detection sensitivities at the photon level for altimetry applications. These "photon-counting" instruments are often the operational solution for high-altitude or space-based platforms where low signal strength and size limitations must be accommodated. Despite the many existing algorithms for lidar data product generation, there remains a void in techniques available for handling the increased noise level in photon-counting measurements, since the larger analog systems do not exhibit such low SNR. Solar background noise poses a significant challenge to accurately extracting surface features from the data. Thus, filtering is required prior to implementation of other post-processing efforts. This paper presents several methodologies for noise filtering photon-counting data. Techniques include modified Canny edge detection, PDF-based signal extraction, and localized statistical analysis. The Canny edge detection identifies features in a rasterized data product using a Gaussian filter and gradient calculation to extract signal photons. PDF-based analysis matches local probability density functions with the aggregate, thereby extracting probable signal points. The localized statistical method assigns thresholding values based on a weighted local mean of angular variances. These approaches have demonstrated the ability to remove noise and subsequently provide accurate surface (ground/canopy) determination. The results presented here are based on analysis of multiple data sets acquired with the high-altitude NASA MABEL system and photon-counting data supplied by Sigma Space Inc., configured to simulate the expected data product of the instrument for NASA's upcoming ICESat-2 mission.
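
    The abstract names three filtering families (Canny-edge, PDF-based, and localized statistics) without implementation detail; as a stand-in, the sketch below applies the simplest kind of localized statistic, a neighbour-count density filter, to synthetic photon-counting altimetry data. The window sizes, neighbour threshold, and simulated geometry are assumptions, not the paper's parameters.

        import numpy as np

        def density_filter(x_m, z_m, dx=5.0, dz=1.0, min_neighbors=3):
            """Keep a photon only if enough other photons fall inside a small
            along-track/elevation window around it; diffuse solar-background
            photons rarely pass this test."""
            keep = np.zeros(x_m.size, dtype=bool)
            for i in range(x_m.size):
                near = (np.abs(x_m - x_m[i]) < dx) & (np.abs(z_m - z_m[i]) < dz)
                keep[i] = near.sum() - 1 >= min_neighbors   # exclude the photon itself
            return keep

        # Toy data: a gently sloping ground return buried in uniform background noise.
        rng = np.random.default_rng(6)
        x_sig = rng.uniform(0, 1000, 600)
        z_sig = 0.01 * x_sig + rng.normal(0, 0.2, 600)
        x_bg = rng.uniform(0, 1000, 2000)
        z_bg = rng.uniform(-50, 50, 2000)
        x = np.concatenate([x_sig, x_bg])
        z = np.concatenate([z_sig, z_bg])
        keep = density_filter(x, z)
        print("kept", int(keep.sum()), "of", x.size, "photons")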

  6. Integration and demonstration of MEMS-scanned LADAR for robotic navigation

    NASA Astrophysics Data System (ADS)

    Stann, Barry L.; Dammann, John F.; Del Giorno, Mark; DiBerardino, Charles; Giza, Mark M.; Powers, Michael A.; Uzunovic, Nenad

    2014-06-01

    LADAR is among the pre-eminent sensor modalities for autonomous vehicle navigation. Size, weight, power and cost constraints impose significant practical limitations on perception systems intended for small ground robots. In recent years, the Army Research Laboratory (ARL) developed a LADAR architecture based on a MEMS mirror scanner that fundamentally improves the trade-offs between these limitations and sensor capability. We describe how the characteristics of a highly developed prototype correspond to and satisfy the requirements of autonomous navigation and the experimental scenarios of the ARL Robotics Collaborative Technology Alliance (RCTA) program. In particular, the long maximum and short minimum range capability of the ARL MEMS LADAR makes it remarkably suitable for a wide variety of scenarios from building mapping to the manipulation of objects at close range, including dexterous manipulation with robotic arms. A prototype system was applied to a small (approximately 50 kg) unmanned robotic vehicle as the primary mobility perception sensor. We present the results of a field test where the perception information supplied by the LADAR system successfully accomplished the experimental objectives of an Integrated Research Assessment (IRA).

  7. Linear Mode Photon Counting LADAR Camera Development for the Ultra-Sensitive Detector Program

    NASA Astrophysics Data System (ADS)

    Jack, M.; Bailey, S.; Edwards, J.; Burkholder, R.; Liu, K.; Asbrock, J.; Randall, V.; Chapman, G.; Riker, J.

    Advanced LADAR receivers enable high-accuracy identification of targets at ranges beyond those of standard EOIR sensors. Increased sensitivity of these receivers will enable reductions in laser power, and hence more affordable, smaller sensors as well as much longer detection ranges. Raytheon has made a recent breakthrough in LADAR architecture by combining very low noise (~30 electron) front-end amplifiers with moderate-gain (>60) avalanche photodiodes. The combination of these enables detection of laser pulse returns containing from as few as one photon up to thousands of photons. Because a lower APD gain is utilized, the sensor operation differs dramatically from traditional "Geiger-mode APD" LADARs. Linear-mode photon counting LADAR offers advantages including determination of intensity as well as time of arrival, nanosecond recovery times, and discrimination between radiation events and signals. In our talk we will review the basic amplifier and APD component performance, the front-end architecture, the demonstration of single-photon detection using a simple 4 x 4 SCA, and the design of a fully integrated photon counting camera under development in support of the Ultra-Sensitive Detector (USD) program sponsored by the Air Force Research Laboratory at Kirtland AFB, NM. Work supported in part by AFRL under contract FA8632-05-C-2454; Dr. Jim Riker, Program Manager.

  8. Case study: The Avengers 3D: cinematic techniques and digitally created 3D

    NASA Astrophysics Data System (ADS)

    Clark, Graham D.

    2013-03-01

    Marvel's THE AVENGERS was the third film Stereo D collaborated on with Marvel; it was a summation of our artistic development of what digitally created 3D and Stereo D's artists and toolsets afford Marvel's filmmakers: the ability to shape stereographic space to support the film and story in a way that balances human perception and live photography. We took our artistic lead from the cinematic intentions of Marvel, the Director Joss Whedon, and Director of Photography Seamus McGarvey. In the digital creation of a 3D film from a 2D image capture, recommendations on cinematic techniques are offered by Stereo D to the filmmakers at each step from pre-production onwards, through set, and into post. As the footage arrives at our facility we respond in depth to the cinematic qualities of the imagery in the context of the edit and story, with the guidance of the Directors and Studio, creating stereoscopic imagery. Our involvement in The Avengers began early in production; after reading the script we had the opportunity and honor to meet and work with the Director Joss Whedon and DP Seamus McGarvey on set and into post. We presented what is obvious to such great filmmakers in the way of cinematic techniques as they relate to the standard depth cues and story points we would use to evaluate depth for their film. Our hope was that any cinematic habits that supported better 3D would be emphasized. In searching for a 3D statement for the studio and filmmakers, we arrived at a stereographic style that allowed for comfort and maximum visual engagement for the viewer.

  9. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  10. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  11. IFSAR processing for 3D target reconstruction

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Moses, Randolph L.

    2005-05-01

    In this paper we investigate the use of interferometric synthetic aperture radar (IFSAR) processing for the 3D reconstruction of radar targets. A major source of reconstruction error is induced by multiple scattering responses in a resolution cell, giving rise to height errors. We present a model for multiple scattering centers and analyze the errors that result using traditional IFSAR height estimation. We present a simple geometric model that characterizes the height error and suggests tests for detecting or reducing this error. We consider the use of image magnitude difference as a test statistic to detect multiple scattering responses in a resolution cell, and we analyze the resulting height error reduction and hypothesis test performance using this statistic. Finally, we consider phase linearity test statistics when three or more IFSAR images are available. Examples using synthetic Xpatch backhoe imagery are presented.
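
    As a minimal illustration of the magnitude-difference test statistic described above (not the paper's analysis), the sketch below compares the normalized magnitude difference of two interferometric images for a cell containing a single scatterer and a cell containing two interfering scatterers; the detection threshold is an assumed value.

        import numpy as np

        def multi_scatterer_flag(img1, img2, threshold=0.2):
            """Flag resolution cells whose magnitudes differ markedly between the
            two interferometric images: a single dominant scatterer gives similar
            magnitudes at both apertures, while interfering scatterers generally
            do not."""
            m1, m2 = np.abs(img1), np.abs(img2)
            stat = np.abs(m1 - m2) / (m1 + m2 + 1e-12)   # normalized magnitude difference
            return stat > threshold, stat

        # Toy cells: one single scatterer, one pair of interfering scatterers whose
        # relative phase differs between the two interferometric apertures.
        single = [1.0 * np.exp(1j * 0.3), 1.0 * np.exp(1j * 0.5)]
        double = [1.0 + 0.9 * np.exp(1j * 0.1), 1.0 + 0.9 * np.exp(1j * 2.8)]
        img1 = np.array([single[0], double[0]])
        img2 = np.array([single[1], double[1]])
        flags, stat = multi_scatterer_flag(img1, img2)
        print("test statistic per cell:", np.round(stat, 3), "flagged:", flags)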

  12. World Wind 3D Earth Viewing

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick; Maxwell, Christopher; Kim, Randolph; Gaskins, Tom

    2007-01-01

    World Wind allows users to zoom from satellite altitude down to any place on Earth, leveraging high-resolution LandSat imagery and SRTM (Shuttle Radar Topography Mission) elevation data to experience Earth in visually rich 3D. In addition to Earth, World Wind can also visualize other planets, and there are already comprehensive data sets for Mars and the Earth's moon, which are as easily accessible as those of Earth. There have been more than 20 million downloads to date, and the software is being used heavily by the Department of Defense due to the code's ability to be extended and the evolution of the code courtesy of NASA and the user community. Primary features include the dynamic access to public domain imagery and its ease of use. All one needs to control World Wind is a two-button mouse. Additional guides and features can be accessed through a simplified menu. A JAVA version will be available soon. Navigation is automated with single clicks of a mouse, or by typing in any location to automatically zoom in to see it. The World Wind install package contains the necessary requirements such as the .NET runtime and managed DirectX library. World Wind can display combinations of data from a variety of sources, including Blue Marble, LandSat 7, SRTM, NASA Scientific Visualization Studio, GLOBE, and much more. A thorough list of features, the user manual, a key chart, and screen shots are available at http://worldwind.arc.nasa.gov.

  13. 3D visual presentation of shoulder joint motion.

    PubMed

    Totterman, S; Tamez-Pena, J; Kwok, E; Strang, J; Smith, J; Rubens, D; Parker, K

    1998-01-01

    The 3D visual presentation of biodynamic events of human joints is a challenging task. Although the 3D reconstruction of high-contrast structures from CT data has been widely explored, there is much less experience in reconstructing small, low-contrast soft tissue structures from inhomogeneous and sometimes noisy MR data. Further, there are no algorithms for tracking the motion of moving anatomic structures through MR data. We present a comprehensive approach to 3D musculoskeletal imagery that addresses these challenges. Specific imaging protocols, segmentation algorithms and rendering techniques are developed and applied to render complex 3D musculoskeletal systems for their 4D visual presentation. Applications of our approach include analysis of the rotational motion of the shoulder, knee flexion, and other complex musculoskeletal motions, and the development of interactive virtual human joints.

  14. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  15. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  16. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  17. LLNL-Earth3D

    SciTech Connect

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  18. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a further development of noninvasive diagnostic imaging based on real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the estimated volumes from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria developed by us for the 3D imaging of processed data. Great differences were found among the three different techniques for the estimated volumes of the liver findings. 3D ultrasound represents a valuable method for judging the morphological appearance of abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible.

  19. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky ("spaxels") onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  20. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  1. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  2. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  3. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
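
    To make the workflow above concrete, the sketch below (not PLOT3D source code, and independent of PLOT3D's file formats) shows how one of the derived functions, static pressure, follows from the conservative variables a solution file stores at each grid point; the perfect-gas assumption and the value of gamma are illustrative.

```python
# Hedged sketch: static pressure from PLOT3D-style conservative variables
# (density, x/y/z-momentum, stagnation energy per unit volume), assuming a
# perfect gas with ratio of specific heats `gamma`.
import numpy as np

def static_pressure(rho, rho_u, rho_v, rho_w, rho_e, gamma=1.4):
    """p = (gamma - 1) * (E - 0.5 * rho * |u|^2) at every grid point."""
    kinetic = 0.5 * (rho_u**2 + rho_v**2 + rho_w**2) / rho
    return (gamma - 1.0) * (rho_e - kinetic)

# Tiny usage example on a uniform 2x2x2 block of flow
shape = (2, 2, 2)
p = static_pressure(np.full(shape, 1.0), np.full(shape, 0.5),
                    np.zeros(shape), np.zeros(shape), np.full(shape, 2.5))
print(p)
```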

  4. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  5. 3D wavefront image formation for NIITEK GPR

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  6. 3D Geomodeling of the Venezuelan Andes

    NASA Astrophysics Data System (ADS)

    Monod, B.; Dhont, D.; Hervouet, Y.; Backé, G.; Klarica, S.; Choy, J. E.

    2010-12-01

    The crustal structure of the Venezuelan Andes is investigated thanks to a geomodel. The method integrates surface structural data, remote sensing imagery, crustal scale balanced cross-sections, earthquake locations and focal mechanism solutions to reconstruct fault surfaces at the scale of the mountain belt into a 3D environment. The model proves to be essential for understanding the basic processes of both the orogenic float and the tectonic escape involved in the Plio-Quaternary evolution of the orogen. The reconstruction of the Bocono and Valera faults reveals the 3D shape of the Trujillo block whose geometry can be compared to a boat bow floating over a mid-crustal detachment horizon emerging at the Bocono-Valera triple junction. Motion of the Trujillo block is accompanied by a generalized extension in the upper crust accommodated by normal faults with listric geometries such as for the Motatan, Momboy and Tuñame faults. Extension may be related to the lateral spreading of the upper crust, suggesting that gravity forces play an important role in the escape process.

  7. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
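
    A minimal sketch of the keypoint-matching stage described above, assuming OpenCV's ORB detector and a brute-force Hamming matcher (the authors' actual detector, matcher, and thresholds are not stated in the abstract): it matches features between the left and right frames, discards frames with too few matches, and reports the residual vertical disparity in pixels.

```python
# Hedged sketch of keypoint matching and vertical-disparity measurement;
# not the paper's implementation.
import cv2
import numpy as np

def vertical_disparity_stats(left_gray, right_gray, min_matches=20):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return None  # insufficiently rich keypoint constellation -> discard frame
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    if len(matches) < min_matches:
        return None  # too few matches -> discard frame
    dy = np.array([kp_r[m.trainIdx].pt[1] - kp_l[m.queryIdx].pt[1] for m in matches])
    # The median is robust to the erroneous matches the paper detects and discards.
    return float(np.median(dy)), float(np.std(dy))
```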

  8. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three dimensional laser radar imagery for use with a robotic type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide to it needed information so it can fetch and grasp targets in a space-type scenario.

  9. Spatially resolved 3D noise

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Preece, Bradley L.; Doe, Joshua M.; Burks, Stephen D.

    2016-05-01

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems known as 3D noise. In this correspondence, we describe how the confidence intervals for the 3D noise measurement allow for determination of the sampling necessary to reach a desired precision. We then apply that knowledge to create a smaller cube that can be evaluated spatially across the 2D image, giving the noise as a function of position. The method presented here allows for defective pixel identification and implements the finite sampling correction matrix. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks file exchange [1].
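
    The sketch below illustrates the directional-averaging idea behind a few of the classical 3D-noise components on a data cube U[t, v, h] collected against a uniform source; it is a simplified stand-in for, not a port of, the NVESD MATLAB functions the abstract points to, and it omits the finite-sampling correction.

```python
# Hedged, simplified estimate of a subset of 3D-noise components.
import numpy as np

def three_d_noise_subset(U):
    U = U - U.mean()                      # remove the global mean level S
    fixed_pattern = U.mean(axis=0)        # average over time  -> spatial fixed pattern
    temporal_row  = U.mean(axis=(1, 2))   # average over space -> frame-to-frame noise
    residual = U - fixed_pattern[None, :, :] - temporal_row[:, None, None]
    return {
        "sigma_vh":  fixed_pattern.std(), # spatial fixed-pattern noise
        "sigma_t":   temporal_row.std(),  # temporal (frame-to-frame) noise
        "sigma_tvh": residual.std(),      # random spatio-temporal noise (approximate)
    }

# usage: print(three_d_noise_subset(np.random.normal(size=(64, 128, 128))))
```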

  10. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.
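
    One generic way to write down the kind of joint problem the abstract describes (the notation below is illustrative, not taken from the paper): with y the stacked multipass samples for a ground pixel, A the interferometric steering matrix over candidate heights, x the elevation-domain reflectivity assumed sparse, and phi_1..phi_P the unknown per-pass focus phases,

```latex
% Hedged, generic formulation of joint sparse height / autofocus estimation.
\[
  (\hat{x},\hat{\boldsymbol{\phi}})
    = \arg\min_{x,\,\boldsymbol{\phi}}
      \bigl\| y - \mathrm{diag}\!\left(e^{j\phi_1},\dots,e^{j\phi_P}\right) A\,x \bigr\|_2^2
      + \lambda \,\| x \|_1 ,
\]
```

    so the scatterer heights (through the support of x) and the autofocus parameters are estimated simultaneously, as the abstract states.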

  11. Accepting the T3D

    SciTech Connect

    Rich, D.O.; Pope, S.C.; DeLapp, J.G.

    1994-10-01

    In April, a 128 PE Cray T3D was installed at Los Alamos National Laboratory's Advanced Computing Laboratory as part of the DOE's High-Performance Parallel Processor Program (H4P). In conjunction with CRI, the authors implemented a 30 day acceptance test. The test was constructed in part to help them understand the strengths and weaknesses of the T3D. In this paper, they briefly describe the H4P and its goals. They discuss the design and implementation of the T3D acceptance test and detail issues that arose during the test. They conclude with a set of system requirements that must be addressed as the T3D system evolves.

  12. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3d-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  13. Atmospheric aerosol and molecular backscatter imaging effects on direct detection LADAR

    NASA Astrophysics Data System (ADS)

    Youmans, Douglas G.

    2015-05-01

    Backscatter from atmospheric aerosols and molecular nitrogen and oxygen causes "clutter" noise in direct detection ladar applications operating within the atmosphere. The backscatter clutter is more pronounced in multiple pulse, high PRF ladars where pulse-averaging is used to increase operating range. As more and more pulses are added to the wavetrain, the backscatter increases. We analyze the imaging of a transmitted Gaussian laser-mode multi-pulse wave-train scattered off of aerosols and molecules at the focal plane, including angular-slew rate resulting from optical tracking, angular lead-angle, and bistatic-optics spatial separation. The defocused backscatter images, from those pulses closest to the receiver, are analyzed using a simple geometrical optics approximation. Methods for estimating the aerosol number density versus altitude and the volume backscatter coefficient of the aerosols are also discussed.

  14. Bound on range precision for shot-noise limited ladar systems.

    PubMed

    Johnson, Steven; Cain, Stephen

    2008-10-01

    The precision of ladar range measurements is limited by noise. The fundamental source of noise in a laser signal is the random time between photon arrivals. This phenomenon, called shot noise, is modeled as a Poisson random process. Other noise sources in the system are also modeled as Poisson processes. Under the Poisson-noise assumption, the Cramer-Rao lower bound (CRLB) on range measurements is derived. This bound on the variance of any unbiased range estimate is greater than the CRLB derived by assuming Gaussian noise of equal variance. Finally, it is shown that, for a ladar capable of dividing a fixed amount of energy into multiple laser pulses, the range precision is maximized when all energy is transmitted in a single pulse.
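
    A compact way to see where the Poisson bound comes from (standard estimation-theory background, not reproduced from the paper): model the detected photon arrivals as an inhomogeneous Poisson process with rate lambda(t - tau), where tau is the round-trip delay to be estimated.

```latex
% Fisher information for the delay tau under an inhomogeneous Poisson model,
% and the resulting Cramer-Rao lower bound.
\[
  I(\tau) = \int \frac{\left[\lambda'(t-\tau)\right]^2}{\lambda(t-\tau)}\,dt ,
  \qquad
  \operatorname{var}(\hat{\tau}) \;\ge\; \frac{1}{I(\tau)} .
\]
% Worked case (assumed, illustrative): a background-free Gaussian-envelope pulse
% of RMS width sigma_p carrying a mean of N signal photons gives I = N / sigma_p^2, so
\[
  \operatorname{var}(\hat{\tau}) \ge \frac{\sigma_p^2}{N},
  \qquad
  \sigma_R \ge \frac{c}{2}\,\frac{\sigma_p}{\sqrt{N}} .
\]
```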

  15. Precision and accuracy testing of FMCW ladar-based length metrology.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

    The calibration and traceability of high-resolution frequency modulated continuous wave (FMCW) ladar sources is a requirement for their use in length and volume metrology. We report the calibration of FMCW ladar length measurement systems by use of spectroscopy of molecular frequency references HCN (C-band) or CO (L-band) to calibrate the chirp rate of the FMCW sources. Propagating the stated uncertainties from the molecular calibrations provided by NIST and measurement errors provide an estimated uncertainty of a few ppm for the FMCW system. As a test of this calibration, a displacement measurement interferometer with a laser wavelength close to that of our FMCW system was built to make comparisons of the relative precision and accuracy. The comparisons performed show <10  ppm agreement, which was within the combined estimated uncertainties of the FMCW system and interferometer. PMID:26193146
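
    A back-of-the-envelope illustration of why the chirp-rate calibration matters (all numbers below are assumptions, not values from the paper): range is recovered from the beat frequency as R = c * f_b / (2 * kappa), so any fractional error in the calibrated chirp rate kappa maps directly into a ppm-level error in the measured length.

```python
# Hedged sketch of FMCW range recovery and chirp-rate error propagation.
c = 299_792_458.0          # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_rate_hz_per_s):
    return c * f_beat_hz / (2.0 * chirp_rate_hz_per_s)

kappa = 1.0e14             # assumed optical chirp rate, Hz/s
f_b = 2.0e6                # assumed measured beat frequency, Hz
R_nominal = fmcw_range(f_b, kappa)
R_biased  = fmcw_range(f_b, kappa * (1 + 5e-6))   # 5 ppm chirp-rate error
print(R_nominal, "m;", (R_biased - R_nominal) / R_nominal * 1e6, "ppm range error")
```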

  16. Advances in ground vehicle-based LADAR for standoff detection of road-side hazards

    NASA Astrophysics Data System (ADS)

    Hollinger, Jim; Vessey, Alyssa; Close, Ryan; Middleton, Seth; Williams, Kathryn; Rupp, Ronald; Nguyen, Son

    2016-05-01

    Commercial sensor technology has the potential to bring cost-effective sensors to a number of U.S. Army applications. By using sensors built for widespread commercial applications, such as the automotive market, the Army can decrease the costs of future systems while increasing overall capabilities. Additional sensors operating in alternate and orthogonal modalities can also be leveraged to gain a broader spectrum measurement of the environment. Leveraging multiple phenomenologies can reduce false alarms and make detection algorithms more robust to varied concealment materials. In this paper, this approach is applied to the detection of roadside hazards partially concealed by light-to-medium vegetation. This paper will present advances in detection algorithms using a ground vehicle-based commercial LADAR system. The benefits of augmenting a LADAR with millimeter-wave automotive radar and results from relevant data sets are also discussed.

  17. Optical imaging process based on two-dimensional Fourier transform for synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Sun, Zhiwei; Zhi, Ya'nan; Liu, Liren; Sun, Jianfeng; Zhou, Yu; Hou, Peipei

    2013-09-01

    Synthetic aperture imaging ladar (SAIL) systems typically generate large amounts of data that are difficult to compress with digital methods. This paper presents an optical SAIL processor based on compensation of the quadratic phase of the echo in the azimuth direction followed by a two-dimensional Fourier transform. The optical processor mainly consists of a phase-only liquid crystal spatial light modulator (LCSLM) to load the phase data of the target echo, a cylindrical lens to compensate the quadratic phase, and a spherical lens to perform the two-dimensional Fourier transform. We show the image-processing result for a practical target echo obtained with a synthetic aperture imaging ladar demonstrator. The optical processor is compact and lightweight, provides inherently parallel, speed-of-light computing capability, and has promising applications, especially in onboard and satellite-borne SAIL systems.
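
    The following NumPy sketch is a digital analogue of the optical chain described above (LCSLM phase loading, cylindrical-lens quadratic-phase compensation, spherical-lens Fourier transform); the chirp rate, PRF, and array sizes are illustrative assumptions, not the demonstrator's parameters.

```python
# Hedged digital analogue of the optical SAIL processor: azimuth quadratic-phase
# compensation followed by a 2D Fourier transform to focus the image.
import numpy as np

def focus_sail(echo, azimuth_chirp_rate, prf):
    n_az, n_rg = echo.shape
    t = (np.arange(n_az) - n_az / 2) / prf                 # slow time, s
    azimuth_deramp = np.exp(-1j * np.pi * azimuth_chirp_rate * t**2)
    compensated = echo * azimuth_deramp[:, None]           # quadratic-phase compensation
    return np.fft.fftshift(np.fft.fft2(compensated))       # 2D Fourier transform -> image

# usage: image = focus_sail(np.ones((256, 256), dtype=complex), 50.0, 1000.0)
```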

  18. Cramer-Rao lower bound on range error for LADARs with Geiger-mode avalanche photodiodes.

    PubMed

    Johnson, Steven E

    2010-08-20

    The Cramer-Rao lower bound (CRLB) on range error is calculated for laser detection and ranging (LADAR) systems using Geiger-mode avalanche photodiodes (GMAPDs) to detect reflected laser pulses. For the cases considered, the GMAPD range error CRLB is greater than the CRLB for a photon-counting device. It is also shown that the GMAPD range error CRLB is minimized when the mean energy in the received laser pulse is finite. Given typical LADAR system parameters, a Gaussian-envelope received pulse, and a noise detection rate of less than 4 MHz, the GMAPD range error CRLB is minimized when the quantum efficiency times the mean number of received laser pulse photons is between 2.2 and 2.3. PMID:20733630
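
    An intuitive, illustrative relation (not the paper's derivation) for why a finite pulse energy is optimal with a Geiger-mode detector: with quantum efficiency eta and a received pulse of mean photon number n-bar, the probability that the diode fires at all during the pulse, ignoring noise counts, is

```latex
\[
  P_{\mathrm{det}} = 1 - e^{-\eta\bar{n}} .
\]
```

    Because the diode reports only its first avalanche and must then be re-armed, increasing eta*n-bar far beyond a few photons adds little detection probability while biasing the firing time toward the leading edge of the pulse, which is consistent with the finite optimum of roughly 2.2 to 2.3 quoted above.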

  19. Worldwide uncertainty assessments of ladar and radar signal-to-noise ratio performance for diverse low altitude atmospheric environments

    NASA Astrophysics Data System (ADS)

    Fiorino, Steven T.; Bartell, Richard J.; Krizo, Matthew J.; Caylor, Gregory; Moore, Kenneth P.; Harris, Thomas R.; Cusumano, Salvatore J.

    2010-06-01

    In this study of atmospheric effects on laser ranging and detection (ladar) and radar systems, the parameter space is explored using the Air Force Institute of Technology Center for Directed Energy's (AFIT/CDE) High Energy Laser End-to-End Operational Simulation (HELEEOS) parametric one-on-one engagement level model. The expected performance of ladar systems is assessed at a representative wavelength of 1.557 µm at a number of widely dispersed land and maritime locations worldwide. Radar system performance is assessed at 95 GHz and 250 GHz. Scenarios evaluated include both down looking oblique and vertical engagement geometries over ranges up to 3000 meters in which clear air aerosols and thin layers of fog, locally heavy rain, and low stratus cloud types are expected to occur. Seasonal and boundary layer variations are considered to determine optimum employment techniques to exploit or defeat the environmental conditions. Each atmospheric particulate/obscurant/hydrometeor is evaluated based on its wavelength-dependent forward and off-axis scattering characteristics and absorption effects on system interrogation. Results are presented in the form of worldwide plots of notional signal to noise ratio. The ladar and 95 GHz system types exhibit similar SNR performance for forward oblique clear air operation. 1.557 µm ladar performs well for vertical geometries in the presence of ground fog, but has no near-horizontal performance under such meteorological conditions. It also has no performance if low altitude stratus is present. 95 GHz performs well for both the fog and stratus layer cases, for both vertical and forward oblique geometries. The 250 GHz radar system is heavily impacted by water vapor absorption in all scenarios studied; however it is not as strongly affected by clouds and fog as the 1.557 µm ladar. Locally heavy rain will severely limit ladar system performance at these wavelengths. However, under heavy rain conditions ladar outperforms both radar

  20. LASTRAC.3d: Transition Prediction in 3D Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    Langley Stability and Transition Analysis Code (LASTRAC) is a general-purpose, physics-based transition prediction code released by NASA for laminar flow control studies and transition research. This paper describes the LASTRAC extension to general three-dimensional (3D) boundary layers such as finite swept wings, cones, or bodies at an angle of attack. The stability problem is formulated by using a body-fitted nonorthogonal curvilinear coordinate system constructed on the body surface. The nonorthogonal coordinate system offers a variety of marching paths and spanwise waveforms. In the extreme case of an infinite swept wing boundary layer, marching with a nonorthogonal coordinate produces identical solutions to those obtained with an orthogonal coordinate system using the earlier release of LASTRAC. Several methods to formulate the 3D parabolized stability equations (PSE) are discussed. A surface-marching procedure akin to that for 3D boundary layer equations may be used to solve the 3D parabolized disturbance equations. On the other hand, the local line-marching PSE method, formulated as an easy extension from its 2D counterpart and capable of handling the spanwise mean flow and disturbance variation, offers an alternative. A linear stability theory or parabolized stability equations based N-factor analysis carried out along the streamline direction with a fixed wavelength and downstream-varying spanwise direction constitutes an efficient engineering approach to study instability wave evolution in a 3D boundary layer. The surface-marching PSE method enables a consistent treatment of the disturbance evolution along both streamwise and spanwise directions but requires more stringent initial conditions. Both PSE methods and the traditional LST approach are implemented in the LASTRAC.3d code. Several test cases for tapered or finite swept wings and cones at an angle of attack are discussed.
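
    For reference, the N-factor accumulated along the marching path in such an analysis is the standard integrated amplification of the instability wave (textbook definition, not specific to LASTRAC):

```latex
\[
  N(s) \;=\; \int_{s_0}^{s} -\alpha_i(s')\,\mathrm{d}s'
  \;=\; \ln\!\frac{A(s)}{A(s_0)} ,
\]
```

    where alpha_i is the local spatial growth rate from LST or PSE, s_0 is the neutral point where amplification begins, and A is the disturbance amplitude; transition onset is correlated with N reaching an empirically determined critical value.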

  1. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing that makes it possible to obtain a solid object from a 3D model created with 3D modelling software. The final product is obtained by an additive process in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes that would be quite difficult to produce with dedicated conventional facilities. Because the object is built up by superposing one layer on another, no particular workflow is needed; it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the ESA small space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and launch is foreseen in 2017. In this paper I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  2. A LADAR bare earth extraction technique for diverse topography and complex scenes

    NASA Astrophysics Data System (ADS)

    Neuenschwander, Amy L.; Stevenson, Terry H.; Magruder, Lori A.

    2012-06-01

    Bare earth extraction is an important component of LADAR data analysis in terms of terrain classification. The challenge in providing accurate digital models is augmented when there is diverse topography within the data set or complex combinations of vegetation and built structures. A successful approach provides a flexible methodology (adaptable for topography and/or environment) that is capable of integrating multiple ladar point cloud data attributes. A newly developed approach (TE-SiP) uses a 2nd and 3rd order spatial derivative for each point in the DEM to determine sets of contiguous regions of similar elevation. Specifically, the derivative of the central point represents the curvature of the terrain at that position. Contiguous sets of high (positive or negative) values define sharp edges such as building edges or cliffs. This method is independent of the slope, such that very steep but continuous topography still has relatively low curvature values and is preserved in the terrain classification. Next, a recursive segmentation method identifies unique features of homogeneity on the surface separated by areas of high curvature. An iterative selection process is used to eliminate regions containing buildings or vegetation from the terrain surface. This technique was tested on a variety of existing LADAR surveys, each with varying levels of topographic complexity. The results shown here include developed and forested regions in the Dominican Republic.
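
    A simplified sketch in the spirit of the curvature test described above (not the TE-SiP implementation): estimate curvature of a gridded DEM with a discrete Laplacian, flag high-magnitude values as sharp edges such as building walls or cliffs, and label the remaining contiguous low-curvature regions; the threshold value is an illustrative assumption.

```python
# Hedged sketch of curvature-based edge flagging and region labelling on a DEM.
import numpy as np
from scipy import ndimage

def low_curvature_regions(dem, curvature_threshold=1.0):
    gy, gx = np.gradient(dem)
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    curvature = gxx + gyy                      # 2nd-order spatial derivative (Laplacian)
    smooth = np.abs(curvature) < curvature_threshold
    labels, n_regions = ndimage.label(smooth)  # contiguous regions of similar behaviour
    return curvature, labels, n_regions

# usage: curv, labels, n = low_curvature_regions(np.random.rand(100, 100) * 5.0)
```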

  3. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  4. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  5. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  6. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia.

  7. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  8. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  9. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  10. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
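
    As a generic stand-in for the PCA plus Fisher linear discriminant feature pipeline described above (this is not the SNL3dFace code, and the component counts are assumptions), the projection and similarity-matrix steps can be sketched with scikit-learn on already-normalized, flattened face vectors:

```python
# Hedged sketch of PCA + FLDA feature projection and similarity-matrix computation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_pca_flda(face_vectors, identities, n_pca=50):
    pca = PCA(n_components=n_pca).fit(face_vectors)
    flda = LinearDiscriminantAnalysis().fit(pca.transform(face_vectors), identities)
    return pca, flda

def similarity_matrix(pca, flda, probe_vectors, gallery_vectors):
    p = flda.transform(pca.transform(probe_vectors))
    g = flda.transform(pca.transform(gallery_vectors))
    # cosine similarity between every probe and every gallery feature vector
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    return p @ g.T
```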

  11. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  12. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  13. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  14. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-08

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, while graphene is being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading of up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial for building up only minute thermal stress during the printing process.

  15. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  16. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuation drives up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Object Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes.

  17. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  18. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then be the essential tool for understanding, diagnosis and management of patients.

  19. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then be the essential tool for understanding, diagnosis and management of patients. PMID:11494630

  20. A wide angle search technique for a 10.6 micron ladar. [scanning radar using Q switched carbon dioxide laser

    NASA Technical Reports Server (NTRS)

    Levinson, S.; Adelman, S.; Lowrey, D. O.

    1975-01-01

    A ladar (laser radar) sensor designed around a pulsed passively Q-switched CO2 laser, capable of efficient and rapid scans with a narrow beam over a wide field of view, is considered for surveillance and tracking applications in space. The output is a train of narrow pulses with a controllable pulse repetition rate. A resonant vibrating mirror in back of a classical Gregorian telescope, and a plane pointing mirror in front for beam steering, are used in scanning. Scan pulse sequences are described and illustrated. The 10.6 micron ladar set is under consideration as a baseline sensor for various space rendezvous and docking applications.

  1. Developing Spatial Reasoning Through 3D Representations of the Universe

    NASA Astrophysics Data System (ADS)

    Summers, F.; Eisenhamer, B.; McCallister, D.

    2013-12-01

    Mental models of astronomical objects are often greatly hampered by the flat two-dimensional representation of pictures from telescopes. Lacking experience with the true structures in much of the imagery, there is no basis for anything but the default interpretation of a picture postcard. Using astronomical data and scientific visualizations, our professional development session allows teachers and their students to develop their spatial reasoning while forming more accurate and richer mental models. Examples employed in this session include star positions and constellations, morphologies of both normal and interacting galaxies, shapes of planetary nebulae, and three dimensional structures in star forming regions. Participants examine, imagine, predict, and confront the 3D interpretation of well-known 2D imagery using authentic data from NASA, the Hubble Space Telescope, and other scientific sources. The session's cross-disciplinary nature includes science, math, and artistic reasoning while addressing common cosmic misconceptions. Stars of the Orion Constellation seen in 3D explodes the popular misconception that stars in a constellation are all at the same distance. A scientific visualization of two galaxies colliding provides a 3D comparison for Hubble images of interacting galaxies.

  2. DYNA3D. Explicit 3-d Hydrodynamic FEM Program

    SciTech Connect

    Whirley, R.G.; Englemann, B.E. )

    1993-11-30

    DYNA3D is an explicit, three-dimensional, finite element program for analyzing the large deformation dynamic response of inelastic solids and structures. DYNA3D contains 30 material models and 10 equations of state (EOS) to cover a wide range of material behavior. The material models implemented are: elastic, orthotropic elastic, kinematic/isotropic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, Blatz-Ko rubber, high explosive burn, hydrodynamic without deviatoric stresses, elastoplastic hydrodynamic, temperature-dependent elastoplastic, isotropic elastoplastic, isotropic elastoplastic with failure, soil and crushable foam with failure, Johnson/Cook plasticity model, pseudo TENSOR geological model, elastoplastic with fracture, power law isotropic plasticity, strain rate dependent plasticity, rigid, thermal orthotropic, composite damage model, thermal orthotropic with 12 curves, piecewise linear isotropic plasticity, inviscid two invariant geologic cap, orthotropic crushable model, Mooney-Rivlin rubber, resultant plasticity, closed form update shell plasticity, and Frazer-Nash rubber model. The hydrodynamic material models determine only the deviatoric stresses. Pressure is determined by one of 10 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, tabulated, and TENSOR pore collapse. DYNA3D generates three binary output databases. One contains information for complete states at infrequent intervals; 50 to 100 states is typical. The second contains information for a subset of nodes and elements at frequent intervals; 1,000 to 10,000 states is typical. The last contains interface data for contact surfaces.

  3. Space Partitioning for Privacy Enabled 3D City Models

    NASA Astrophysics Data System (ADS)

    Filippovska, Y.; Wichmann, A.; Kada, M.

    2016-10-01

    Due to recent technological progress, the capture and processing of highly detailed 3D data have become widespread. Despite all its prospective uses, data that includes personal living spaces and public buildings can also be considered a serious intrusion into people's privacy and a threat to security. It becomes especially critical if the data is visible to the general public. Thus, a compromise is needed between open access to data and privacy requirements, which can be very different for each application. As privacy is a complex and versatile topic, the focus of this work particularly lies on the visualization of 3D urban data sets. For the purpose of privacy enabled visualizations of 3D city models, we propose to partition the (living) spaces into privacy regions, each featuring its own level of anonymity. Within each region, the depicted 2D and 3D geometry and imagery is anonymized with cartographic generalization techniques. The underlying spatial partitioning is realized as a 2D map generated as a straight skeleton of the open space between buildings. The resulting privacy cells are then merged according to the privacy requirements associated with each building to form larger regions, their borderlines smoothed, and transition zones established between privacy regions to have a harmonious visual appearance. It is exemplarily demonstrated how the proposed method generates privacy enabled 3D city models.

  4. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
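
    The sketch below shows a generic relative-entropy (Kullback-Leibler) calculation on a binarized soil volume, comparing the distribution of porosity over coarse blocks at one scale against a uniform reference; it illustrates the idea only and is not the exact index of Bird et al. (2006) applied in the study.

```python
# Hedged sketch of a block-wise relative-entropy measure on a binary 3D image.
import numpy as np

def block_porosity_relative_entropy(binary_volume, block=8):
    nz, ny, nx = (s // block for s in binary_volume.shape)
    v = binary_volume[:nz * block, :ny * block, :nx * block]
    blocks = v.reshape(nz, block, ny, block, nx, block).mean(axis=(1, 3, 5))
    p = blocks.flatten().astype(float)
    if p.sum() == 0:
        return 0.0                        # no pore voxels at all
    p = p / p.sum()                       # porosity mass assigned to each block
    q = np.full_like(p, 1.0 / p.size)     # uniform reference distribution
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# usage: H = block_porosity_relative_entropy(np.random.rand(64, 64, 64) > 0.7)
```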

  5. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
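
    The second question above, parameter selection by sweeping against a noiseless reference, can be illustrated in a few lines of NumPy/SciPy; the Gaussian filter here is a CPU stand-in for GD3D's bilateral, anisotropic-diffusion, and non-local-means filters, and the sigma grid is an assumption.

```python
# Hedged sketch of a parameter sweep that minimizes MSE against a reference volume.
import numpy as np
from scipy.ndimage import gaussian_filter

def best_sigma(noisy, reference, sigmas=(0.5, 1.0, 1.5, 2.0, 3.0)):
    scores = {s: float(np.mean((gaussian_filter(noisy, s) - reference) ** 2))
              for s in sigmas}
    return min(scores, key=scores.get), scores

# usage:
# ref = np.zeros((32, 32, 32)); ref[8:24, 8:24, 8:24] = 1.0
# noisy = ref + 0.2 * np.random.standard_normal(ref.shape)
# print(best_sigma(noisy, ref))
```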

  6. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  7. Spatial 3D display based on DMD and swept-volume technology

    NASA Astrophysics Data System (ADS)

    Xing, Jianfang; Gong, Huajun; Pan, Wenping; Yue, Jian; Shen, Chunlin

    2011-08-01

    Display devices play an important role in the interaction between humans and the digital world of the computer. Building devices that can display 3-D images in true 3-D space has drawn researchers' attention for many years. In this paper, we develop a novel spatial display by projecting 2D profile slices of 3-D models in rapid succession onto a synchronously rotating, double-bladed helical screen. It is a high-speed, light-addressed system based on Texas Instruments (TI) Digital Mirror Device (DMD) technology, and a high frame refresh rate is achieved by accurate control of the DMD micro-mirrors. When the rotation frequency of the screen is higher than the critical flicker fusion frequency, the stroboscopic time-varying slices are fused into a flicker-free 3-D spatial image because of persistence of vision. The display generates volume-filling 3-D imagery consisting of an array of voxels that can be seen hovering in the swept volume. The design and manufacturing of a prototype is described. It has a resolution of 1024x768x132 voxels at a volume refresh rate of 10 Hz. The 3-D imagery occupies a real physical volume of about 203 cm3; each voxel scatters visible light from the position in which it appears. The display provides full parallax, enabling 3-D imagery to be viewed without eyewear or headsets and supporting a "look around" capability: viewers at practically any orientation see different sides of the imagery, as if walking around a sculpture.

  8. LADAR performance simulations with a high spectral resolution atmospheric transmittance and radiance model: LEEDR

    NASA Astrophysics Data System (ADS)

    Roth, Benjamin D.; Fiorino, Steven T.

    2012-06-01

    In this study of atmospheric effects on Geiger Mode laser ranging and detection (LADAR), the parameter space is explored primarily using the Air Force Institute of Technology Center for Directed Energy's (AFIT/CDE) Laser Environmental Effects Definition and Reference (LEEDR) code. The expected performance of LADAR systems is assessed at operationally representative wavelengths of 1.064, 1.56 and 2.039 μm at a number of locations worldwide. Signal attenuation and background noise are characterized using LEEDR. These results are compared to standard atmosphere and Fast Atmospheric Signature Code (FASCODE) assessments. Scenarios evaluated are based on air-to-ground engagements, including both down-looking oblique and vertical geometries in which anticipated clear air aerosols are expected to occur. Engagement geometry variations are considered to determine optimum employment techniques to exploit or defeat the environmental conditions. Results, presented primarily in the form of worldwide plots of notional signal-to-noise ratios, show a significant climate dependence but large variances between climatological and standard atmosphere assessments. An overall average absolute mean difference ratio of 1.03 is found when climatological signal-to-noise ratios at 40 locations are compared to their equivalent standard atmosphere assessment. Atmospheric transmission is shown to not always correlate with signal-to-noise ratios between different atmosphere profiles. Allowing aerosols to swell with relative humidity proves to be significant, especially for up-looking geometries, reducing the signal-to-noise ratio by several orders of magnitude. Turbulence blurring effects that impact tracking and imaging show that the LADAR system has little capability at a 50 km range, yet the turbulence has little impact at a 3 km range.

  9. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

    Some problems in the application of Ladar reflective tomography for space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties under a limited relative angle range, which is useful for verifying the target shape from the incomplete image, analyzing the shadowing effect of the target, and designing satellite payloads resistant to recognition via the reflective tomography approach. We propose an iterative maximum likelihood method based on Bayesian theory, which can effectively compress the pulse width and greatly improve the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.
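
    The abstract reports an iterative maximum likelihood (Bayesian) method for compressing the pulse width of the recorded projections. A common realization of that idea is Richardson-Lucy style deconvolution, sketched below in 1D under the assumption that the transmitted pulse shape is known; this is offered as an illustration of the approach, not as the authors' exact algorithm, and the scene and pulse are synthetic.

```python
# Iterative maximum-likelihood (Richardson-Lucy style) deconvolution in 1D.
import numpy as np

def richardson_lucy_1d(measured, pulse, n_iter=50):
    """measured: recorded range profile; pulse: sampled transmit pulse shape."""
    pulse = pulse / pulse.sum()
    pulse_rev = pulse[::-1]
    estimate = np.full_like(measured, measured.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, pulse, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, pulse_rev, mode="same")
    return estimate

if __name__ == "__main__":
    truth = np.zeros(200)
    truth[[60, 70, 130]] = [1.0, 0.6, 0.8]                        # point scatterers
    pulse = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)        # broad laser pulse
    measured = np.convolve(truth, pulse / pulse.sum(), mode="same")
    measured = np.random.default_rng(0).poisson(500 * measured) / 500.0
    sharpened = richardson_lucy_1d(measured, pulse)
    print("strongest bins after deconvolution:", np.sort(np.argsort(sharpened)[-3:]))
```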

  10. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  11. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. In keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  12. 3D Printed Shelby Cobra

    SciTech Connect

    Love, Lonnie

    2015-01-09

    ORNL's newly printed 3D Shelby Cobra was showcased at the 2015 NAIAS in Detroit. This "laboratory on wheels" uses the Shelby Cobra design, celebrating the 50th anniversary of this model and honoring the first vehicle to be voted a national monument. The Shelby was printed at the Department of Energy’s Manufacturing Demonstration Facility at ORNL using the BAAM (Big Area Additive Manufacturing) machine and is intended as a “plug-n-play” laboratory on wheels. The Shelby will allow research and development of integrated components to be tested and enhanced in real time, improving the use of sustainable, digital manufacturing solutions in the automotive industry.

  13. 3-D HYDRODYNAMIC MODELING IN A GEOSPATIAL FRAMEWORK

    SciTech Connect

    Bollinger, J; Alfred Garrett, A; Larry Koffman, L; David Hayes, D

    2006-08-24

    3-D hydrodynamic models are used by the Savannah River National Laboratory (SRNL) to simulate the transport of thermal and radionuclide discharges in coastal estuary systems. Development of such models requires accurate bathymetry, coastline, and boundary condition data in conjunction with the ability to rapidly discretize model domains and interpolate the required geospatial data onto the domain. To facilitate rapid and accurate hydrodynamic model development, SRNL has developed a pre- and post-processor application in a geospatial framework to automate the creation of models using existing data. This automated capability allows development of very detailed models to maximize exploitation of available surface water radionuclide sample data and thermal imagery.

  14. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture.

  15. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
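
    A hedged sketch of the RBF field representation mentioned above: a 2D temperature field modelled as a weighted sum of Gaussian radial basis functions, with the weights fitted by linear least squares to scattered point estimates (standing in for travel-time-derived values and local meteorological measurements). This is heavily simplified relative to the paper's full 3D sound-speed and wind inversion; the grid of centres, length scale, and synthetic observations are assumptions.

```python
# Fit RBF weights to scattered temperature estimates and evaluate the field.
import numpy as np

def rbf_design_matrix(points, centers, length_scale):
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(2)
centers = np.stack(np.meshgrid(np.linspace(0, 1000, 5),
                               np.linspace(0, 1000, 5)), -1).reshape(-1, 2)

obs_xy = rng.uniform(0, 1000, size=(80, 2))              # scattered sample locations
true_temp = 15.0 + 3.0 * np.sin(obs_xy[:, 0] / 300.0)    # toy ground truth (deg C)
obs_temp = true_temp + rng.normal(0, 0.2, size=80)

A = rbf_design_matrix(obs_xy, centers, length_scale=250.0)
weights, *_ = np.linalg.lstsq(A, obs_temp, rcond=None)

grid = np.stack(np.meshgrid(np.linspace(0, 1000, 50),
                            np.linspace(0, 1000, 50)), -1).reshape(-1, 2)
field = rbf_design_matrix(grid, centers, length_scale=250.0) @ weights
print("reconstructed field range: %.1f to %.1f degC" % (field.min(), field.max()))
```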

  16. Gravitation in 3D Spacetime

    NASA Astrophysics Data System (ADS)

    Laubenstein, John; Cockream, Kandi

    2009-05-01

    3D spacetime was developed by the IWPD Scale Metrics (SM) team using a coordinate system that translates n dimensions to n-1. 4-vectors are expressed in 3D along with a scaling factor representing time. Time is not orthogonal to the three spatial dimensions, but rather in alignment with an object's axis-of-motion. We have defined this effect as the object's ``orientation'' (X). The SM orientation (X) is equivalent to the orientation of the 4-velocity vector positioned tangent to its worldline, where X-1=θ+1 and θ is the angle of the 4-vector relative to the axis-of -motion. Both 4-vectors and SM appear to represent valid conceptualizations of the relationship between space and time. Why entertain SM? Scale Metrics gravity is quantized and may suggest a path for the full unification of gravitation with quantum theory. SM has been tested against current observation and is in agreement with the age of the universe, suggests a physical relationship between dark energy and dark matter, is in agreement with the accelerating expansion rate of the universe, contributes to the understanding of the fine-structure constant and provides a physical explanation of relativistic effects.

  17. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  18. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real-time. The data are acquired in motion and thus provide multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as the angle of incidence, the distance between the device and the subject, and other environmental factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.
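
    A hedged sketch of the confidence-weighted fusion described above: each new thermal measurement of a surface point is combined with the running estimate using a weight derived from viewing angle and distance. The particular weighting function below is an assumption made for illustration, not the authors' model.

```python
# Confidence-weighted running fusion of per-vertex surface temperature.
import numpy as np

def measurement_confidence(incidence_angle_rad, distance_m,
                           max_angle=np.radians(75), ref_distance=1.0):
    """Heuristic weight: oblique views and distant measurements are trusted less."""
    if incidence_angle_rad >= max_angle:
        return 0.0
    angle_term = max(0.0, np.cos(incidence_angle_rad))
    range_term = min(1.0, (ref_distance / max(distance_m, ref_distance)) ** 2)
    return angle_term * range_term

def fuse(current_temp, current_weight, new_temp, new_weight):
    """Weighted average of the stored estimate and a new measurement."""
    total = current_weight + new_weight
    if total == 0:
        return current_temp, current_weight
    return (current_temp * current_weight + new_temp * new_weight) / total, total

temp, w = 36.2, 0.5
temp, w = fuse(temp, w, 36.8, measurement_confidence(np.radians(20), 0.8))
print(f"fused temperature {temp:.2f} C, accumulated weight {w:.2f}")
```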

  19. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  20. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. PMID:26153673

  1. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human being’s history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite’s linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process.

  2. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental, however at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber, and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed bit format. It would not have been practical, if not impossible, to have processed the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp. located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved upon reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution being a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality control and statics resolution, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed data-set was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  3. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.
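
    A hedged sketch of the accuracy check implied above: compare reconstructed ground-control-point coordinates from the photogrammetric point cloud with their surveyed (RTK GNSS) coordinates and report per-axis and 3D RMSE. The coordinate values below are made up for illustration only.

```python
# Per-axis and 3D RMSE between surveyed and reconstructed control points.
import numpy as np

surveyed = np.array([[0.000, 0.000, 10.000],
                     [25.410, 3.120, 10.240],
                     [12.080, 31.660, 9.870]])
reconstructed = np.array([[0.031, -0.018, 10.052],
                          [25.370, 3.160, 10.300],
                          [12.110, 31.610, 9.810]])

residuals = reconstructed - surveyed
rmse = np.sqrt((residuals ** 2).mean(axis=0))
print("RMSE x/y/z (m):", np.round(rmse, 3))
print("3D RMSE (m): %.3f" % np.sqrt((residuals ** 2).sum(axis=1).mean()))
```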

  4. 3D segmentation and reconstruction of endobronchial ultrasound

    NASA Astrophysics Data System (ADS)

    Zang, Xiaonan; Breslav, Mikhail; Higgins, William E.

    2013-03-01

    State-of-the-art practice for lung-cancer staging bronchoscopy often draws upon a combination of endobronchial ultrasound (EBUS) and multidetector computed-tomography (MDCT) imaging. While EBUS offers real-time in vivo imaging of suspicious lesions and lymph nodes, its low signal-to-noise ratio and tendency to exhibit missing region-of-interest (ROI) boundaries complicate diagnostic tasks. Furthermore, past efforts did not incorporate automated analysis of EBUS images and a subsequent fusion of the EBUS and MDCT data. To address these issues, we propose near real-time automated methods for three-dimensional (3D) EBUS segmentation and reconstruction that generate a 3D ROI model along with ROI measurements. Results derived from phantom data and lung-cancer patients show the promise of the methods. In addition, we present a preliminary image-guided intervention (IGI) system example, whereby EBUS imagery is registered to a patient's MDCT chest scan.

  5. Low-power portable scanning imaging ladar system

    NASA Astrophysics Data System (ADS)

    Pyburn, Dana; Leon, Roberto; Haji-Saeed, B.; Sengupta, Sandip K.; Testorf, Markus; Kierstead, John; Khoury, Jehad; Woods, Charles L.; Lorenzo, Joseph

    2003-08-01

    We propose and are in the process of progressively implementing an improved architecture for a laser based system to acquire intensity and range images of hard targets in real-time. The system design emphasizes the use of low power laser sources in conjunction with optical preamplification of target return signals to maintain eye safety without incurring the associated performance penalty. The design leverages advanced fiber optic component technology developed for the commercial market to achieve compactness and low power consumption without the high costs and long lead times associated with custom military devices. All important system parameters are designed to be configured in the field, by the user, in software, allowing for adaptive reconfiguration for different missions and targets. Recently we have started our transition from the initial test bed, using a laser in the visible wavelength, into the final system with a 1550nm diode laser. Currently we are able to acquire and display 3-D false-color and gray-scale images, in the laboratory, at moderate frame rates in real-time. Commercial off-the-shelf data acquisition and signal processing software on a desktop computer equipped with commercial acquisition hardware is utilized. Significant improvements in both range and spatial resolution are expected in the near future.

  6. Simulation of a new 3D imaging sensor for identifying difficult military targets

    NASA Astrophysics Data System (ADS)

    Harvey, Christophe; Wood, Jonathan; Randall, Peter; Watson, Graham; Smith, Gordon

    2008-04-01

    This paper reports the successful application of automatic target recognition and identification (ATR/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for targeting operations from fast jet platforms, and is currently being integrated with an ATR/I suite for demonstration and testing. The sensor has been extensively modelled and a set of high fidelity simulated imagery has been generated using the CAMEO-SIM scene generation software tool. These include a variety of different scenarios (varying range, platform altitude, target orientation and environments), and some 'difficult' targets such as concealed military vehicles. The ATR/I algorithms have been tested on this image set and their performance compared to 2D passive imagery from the airborne trials using a Wescam MX-15 infrared sensor and real-time ATR/I suite. This paper outlines the principles behind the sensor model and the methodology of 3D scene simulation. An overview of the 3D ATR/I programme and algorithms is presented, and the relative performance of the ATR/I against the simulated image set is reported. Comparisons are made to the performance of typical 2D sensors, confirming the benefits of 3D imaging for targeting applications.

  7. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel exhibits superelasticity and high electrical conductivity.

  8. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel exhibits superelasticity and high electrical conductivity. PMID:26861680

  9. Simulation of a Geiger-Mode Imaging LADAR System for Performance Assessment

    PubMed Central

    Kim, Seongjoon; Lee, Impyeong; Kwon, Yong Joon

    2013-01-01

    As LADAR systems applications gradually become more diverse, new types of systems are being developed. When developing new systems, simulation studies are an essential prerequisite. A simulator enables performance predictions and optimal system parameters at the design level, as well as providing sample data for developing and validating application algorithms. The purpose of the study is to propose a method for simulating a Geiger-mode imaging LADAR system. We develop simulation software to assess system performance and generate sample data for the applications. The simulation is based on three aspects of modeling—the geometry, radiometry and detection. The geometric model computes the ranges to the reflection points of the laser pulses. The radiometric model generates the return signals, including the noises. The detection model determines the flight times of the laser pulses based on the nature of the Geiger-mode detector. We generated sample data using the simulator with the system parameters and analyzed the detection performance by comparing the simulated points to the reference points. The proportion of the outliers in the simulated points reached 25.53%, indicating the need for efficient outlier elimination algorithms. In addition, the false alarm rate and dropout rate of the designed system were computed as 1.76% and 1.06%, respectively. PMID:23823970
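
    A hedged sketch of the core idea behind the detection model described above: a Geiger-mode detector fires on the first photon event inside the range gate, so each simulated pulse can be classified as a detection, a false alarm (a noise event arriving before the target return) or a dropout (no event at all). The rates, gate length and target time below are illustrative assumptions, not the parameters of the simulated system in the paper.

```python
# Monte Carlo classification of Geiger-mode returns per transmitted pulse.
import numpy as np

rng = np.random.default_rng(3)
gate_ns = 2000.0            # range-gate length
target_ns = 1200.0          # true return time inside the gate
noise_rate = 1e-4           # noise/dark events per ns
mean_signal_photons = 2.0   # mean detected signal photoelectrons per pulse

detections = false_alarms = dropouts = 0
n_pulses = 10000
for _ in range(n_pulses):
    t_noise = rng.exponential(1.0 / noise_rate)          # first noise event time
    has_signal = rng.poisson(mean_signal_photons) > 0    # any signal photoelectron?
    t_signal = target_ns if has_signal else np.inf
    t_first = min(t_noise, t_signal)                     # detector fires on first event
    if t_first > gate_ns:
        dropouts += 1
    elif t_first == t_signal:
        detections += 1
    else:
        false_alarms += 1

print("P(detect)=%.3f  P(false alarm)=%.3f  P(dropout)=%.3f" %
      (detections / n_pulses, false_alarms / n_pulses, dropouts / n_pulses))
```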

  10. High power CO2 coherent ladar haven't quit the stage of military affairs

    NASA Astrophysics Data System (ADS)

    Zhang, Heyong

    2015-05-01

    The invention of the laser in 1960 created the possibility of using a source of coherent light as a transmitter for a laser radar (ladar). Coherent ladar shares many of the basic features of more common microwave radars. However, it is the extremely short operating wavelength of lasers that introduces new military applications, especially in the areas of missile identification, space target tracking, remote range finding, camouflage discrimination and toxic agent detection. Consequently, the most popular applications, such as laser imaging and ranging, were focused on the CO2 laser over the last few decades. With the development of solid-state and fiber lasers, however, some have predicted that the CO2 laser will disappear from military and industrial use, replaced by solid-state and fiber lasers, and that coherent CO2 laser radar will share the same fate in military affairs. In my opinion, however, the high power CO2 laser will remain the most important laser source for laser radar and countermeasures in the future.

  11. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
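
    A hedged sketch of the spectral-filter feature described above: weight each band of a hyperspectral stack by a user-defined transmission curve and sum over wavelength, approximating the image a filter-based confocal microscope would record. The array shapes, wavelength range and Gaussian passband are illustrative assumptions, not ShowMe3D internals.

```python
# Apply a synthetic spectral filter to a hyperspectral cube.
import numpy as np

def apply_spectral_filter(stack, wavelengths, center_nm, fwhm_nm):
    """stack: (rows, cols, bands) hyperspectral cube; returns a 2D filtered image."""
    sigma = fwhm_nm / 2.355
    transmission = np.exp(-0.5 * ((wavelengths - center_nm) / sigma) ** 2)
    return (stack * transmission[None, None, :]).sum(axis=2)

rng = np.random.default_rng(4)
wavelengths = np.linspace(500, 800, 64)                   # nm
cube = rng.random((128, 128, 64))
filtered = apply_spectral_filter(cube, wavelengths, center_nm=560, fwhm_nm=30)
print("filtered image shape:", filtered.shape)
```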

  12. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction for the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step-length. Our approach uses an explicit time-stepping finite-difference (FD) scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency domain analogue, although the discussion of which domain is more efficient still remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time-step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
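
    A hedged 1D scalar-wave analogue of the finite-difference scheme described above: explicit 2nd-order time stepping with a 4th-order spatial stencil. The authors' code is 3D and fully elastic; this sketch only illustrates the update structure, and the grid spacing, time step and velocity model are arbitrary assumptions.

```python
# Explicit time stepping (2nd order in time, 4th order in space) for a 1D scalar wave.
import numpy as np

nx, dx, dt, nt = 400, 5.0, 0.0005, 1500
v = np.full(nx, 2000.0)                 # velocity model (m/s)
p_prev = np.zeros(nx)
p_curr = np.zeros(nx)
p_curr[nx // 2] = 1.0                   # impulsive source at the centre

c = (v * dt / dx) ** 2                  # squared Courant number per grid point
for _ in range(nt):
    lap = np.zeros(nx)
    # 4th-order accurate second derivative in space (interior points only)
    lap[2:-2] = (-p_curr[:-4] + 16 * p_curr[1:-3] - 30 * p_curr[2:-2]
                 + 16 * p_curr[3:-1] - p_curr[4:]) / 12.0
    p_next = 2 * p_curr - p_prev + c * lap
    p_prev, p_curr = p_curr, p_next

print("max |p| after %d steps: %.3e" % (nt, np.abs(p_curr).max()))
```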

  13. Conducting Polymer 3D Microelectrodes

    PubMed Central

    Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi; Castillo-León, Jaime; Emnéus, Jenny; Svendsen, Winnie E.

    2010-01-01

    Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry and the presence of the conducting polymer film has shown to increase the electrochemical activity when compared with electrodes coated with only metal. An electrochemical characterization of gold/polypyrrole electrodes showed exceptional electrochemical behavior and activity. PC12 cells were finally cultured on the investigated materials as a preliminary biocompatibility assessment. These results show that the described electrodes are possibly suitable for future in-vitro neurological measurements. PMID:22163508

  14. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  15. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  16. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  17. 3D Structure of Tillage Soils

    NASA Astrophysics Data System (ADS)

    González-Torre, Iván; Losada, Juan Carlos; Falconer, Ruth; Hapca, Simona; Tarquis, Ana M.

    2015-04-01

    Soil structure may be defined as the spatial arrangement of soil particles, aggregates and pores. The geometry of each one of these elements, as well as their spatial arrangement, has a great influence on the transport of fluids and solutes through the soil. Fractal/multifractal methods have been increasingly applied to quantify soil structure thanks to the advances in computer technology (Tarquis et al., 2003). There is no doubt that computed tomography (CT) has provided an alternative for observing intact soil structure. These CT techniques reduce the physical impact of sampling, providing three-dimensional (3D) information and allowing rapid scanning to study sample dynamics in near real-time (Houston et al., 2013a). However, several authors have dedicated attention to the appropriate pore-solid CT threshold (Elliot and Heck, 2007; Houston et al., 2013b) and the best method to estimate the multifractal parameters (Grau et al., 2006; Tarquis et al., 2009). The aim of the present study is to evaluate the effect of the algorithm applied in the multifractal method (box counting and box gliding) and the cube size on the calculation of generalized fractal dimensions (Dq) in grey images without applying any threshold. To this end, soil samples were extracted from different areas plowed with three tools (moldboard, chisel and plow). Soil samples for each of the tillage treatments were packed into polypropylene cylinders of 8 cm diameter and 10 cm height. These were imaged using an mSIMCT at 155 keV and 25 mA. An aluminium filter (0.25 mm) was applied to reduce beam hardening, and several corrections were later applied during reconstruction. References: Elliot, T.R. and Heck, R.J., 2007. A comparison of 2D and 3D thresholding of CT imagery. Can. J. Soil Sci., 87(4), 405-412. Grau, J., Méndez, V., Tarquis, A.M., Saa, A. and Díaz, M.C., 2006. Comparison of gliding box and box-counting methods in soil image analysis. Geoderma, 134, 349-359. González-Torres, Iván. Theory and
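
    A hedged sketch of the fixed-grid box-counting estimate of generalized dimensions Dq on a greyscale 3D image, using voxel intensities directly as mass with no threshold, as discussed in the record above. The gliding-box variant differs in letting boxes overlap and is not shown; the synthetic volume and box sizes are assumptions.

```python
# Box-counting estimate of generalized dimensions Dq on a greyscale 3D image.
import numpy as np

def partition_masses(volume, box):
    """Sum greyscale mass in non-overlapping cubes of edge `box` voxels."""
    n = (volume.shape[0] // box) * box
    v = volume[:n, :n, :n]
    v = v.reshape(n // box, box, n // box, box, n // box, box)
    masses = v.sum(axis=(1, 3, 5)).ravel()
    return masses / masses.sum()

def generalized_dimension(volume, q, box_sizes):
    xs, ys = [], []
    for box in box_sizes:
        mu = partition_masses(volume, box)
        mu = mu[mu > 0]
        if abs(q - 1.0) < 1e-9:
            ys.append(np.sum(mu * np.log(mu)))     # information dimension D1
        else:
            ys.append(np.log(np.sum(mu ** q)))     # partition function log chi(q)
        xs.append(np.log(box / volume.shape[0]))   # normalised scale, log(epsilon)
    slope = np.polyfit(xs, ys, 1)[0]
    return slope if abs(q - 1.0) < 1e-9 else slope / (q - 1.0)

rng = np.random.default_rng(5)
grey = rng.random((64, 64, 64))                    # space-filling test volume, Dq ~ 3
for q in (0, 1, 2):
    print("D_%d = %.3f" % (q, generalized_dimension(grey, q, [2, 4, 8, 16])))
```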

  18. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  19. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  20. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) Configuration controlled input files; (2) Common file for 2D and 3D, different types of capsules (symcap, etc.); and (3) Can obtain target dimensions, laser pulse, and diagnostics settings automatically from NIF Campaign Management Tool. Using 3D Hydra calculations to investigate different problems: (1) Intrinsic 3D asymmetry; (2) Tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) Synthetic diagnostics.

  1. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  2. Locomotive wheel 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Luo, Zhisheng; Gao, Xiaorong; Wu, Jianle

    2010-08-01

    This article describes a system for reconstructing locomotive wheels, helping workers assess the condition of a wheel through a direct view. The system consists of a line laser, a 2D camera, and a computer. We use the 2D camera to capture the line-laser light reflected by the object, a wheel, and then compute the coordinates of the structured light. Finally, using the Matlab programming language, we transform the point coordinates into a smooth surface and render a 3D view of the wheel. The article also presents the system structure, processing steps and methods, and an experimental platform set up to verify the design. We verify the feasibility of the whole process and analyze the results against standard data. The test results show that the system works well and reconstructs the wheel with high accuracy. Because no such application is yet in use in the railway industry, the system has practical value for railway inspection.
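
    A hedged sketch of the line-laser triangulation that such a system relies on: back-project a detected laser pixel into a camera ray and intersect that ray with the calibrated laser plane to recover the 3D surface point. The camera intrinsics and plane parameters below are illustrative assumptions, not the calibration of the described system.

```python
# Intersect camera rays through detected laser pixels with the laser plane.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # assumed camera intrinsics (pixels)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_n = np.array([1.0, 0.0, -0.5])   # laser plane normal (camera frame)
plane_d = 0.4                          # plane equation: n . X + d = 0

def pixel_to_point(u, v):
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction through pixel
    t = -plane_d / (plane_n @ ray)                   # ray/plane intersection parameter
    return t * ray                                    # 3D point in camera frame (metres)

profile = [pixel_to_point(u, 250.0) for u in range(300, 340, 10)]
for p in profile:
    print(np.round(p, 4))
```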

  3. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, often times the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  4. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  5. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  6. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
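
    A hedged, much-simplified illustration of the first steps of the pipeline described above: subtract the DEM from the DSM to obtain heights above ground, threshold to keep elevated cells, label connected regions, and discard regions outside user-supplied size limits. Real systems add the tree/building separation, boundary tracing, regularization and roof construction mentioned in the record; the synthetic rasters and limits below are assumptions.

```python
# Candidate building extraction from DSM/DEM rasters via nDSM thresholding.
import numpy as np
from scipy import ndimage

def candidate_buildings(dsm, dem, min_height=2.5, min_cells=50, max_cells=5000):
    ndsm = dsm - dem                              # height above ground
    elevated = ndsm > min_height
    labels, n = ndimage.label(elevated)
    keep = np.zeros_like(elevated)
    for region in range(1, n + 1):
        size = int((labels == region).sum())
        if min_cells <= size <= max_cells:        # user-supplied footprint limits
            keep |= labels == region
    return keep

rng = np.random.default_rng(6)
dem = np.zeros((200, 200))
dsm = dem + rng.normal(0, 0.1, dem.shape)
dsm[50:80, 60:100] += 8.0                         # a synthetic building block
mask = candidate_buildings(dsm, dem)
print("building cells found:", int(mask.sum()))
```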

  7. Forward ramp in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Mars Pathfinder's forward rover ramp can be seen successfully unfurled in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. This ramp was not used for the deployment of the microrover Sojourner, which occurred at the end of Sol 2. When this image was taken, Sojourner was still latched to one of the lander's petals, waiting for the command sequence that would execute its descent off of the lander's petal.

    The image helped Pathfinder scientists determine whether to deploy the rover using the forward or backward ramps and the nature of the first rover traverse. The metallic object at the lower left of the image is the lander's low-gain antenna. The square at the end of the ramp is one of the spacecraft's magnetic targets. Dust that accumulates on the magnetic targets will later be examined by Sojourner's Alpha Proton X-Ray Spectrometer instrument for chemical analysis. At right, a lander petal is visible.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    Click below to see the left and right views individually. [figure removed for brevity, see original site] Left [figure removed for brevity, see original site] Right

  8. 3D grain boundary migration

    NASA Astrophysics Data System (ADS)

    Becker, J. K.; Bons, P. D.

    2009-04-01

    Microstructures of rocks play an important role in determining rheological properties and help to reveal the processes that lead to their formation. Some of these processes change the microstructure significantly and may thus have the opposite effect of obliterating any fabrics indicative of the previous history of the rocks. One of these processes is grain boundary migration (GBM). During static recrystallisation, GBM may produce a foam texture that completely overprints a pre-existing grain boundary network, and GBM actively influences the rheology of a rock via its influence on grain size and lattice defect concentration. Here we present new numerical simulation software that is capable of simulating a whole range of processes on the grain scale (it is not limited to grain boundary migration). The software is polyhedron-based, meaning that each grain (or phase) is represented by a polyhedron that has discrete boundaries. The boundary (the shell) of the polyhedron is defined by a set of facets, which in turn are defined by a set of vertices. Each structural entity (polyhedron, facet, and vertex) can have an unlimited number of parameters (depending on the process to be modeled), such as surface energy, concentration, etc., which can be used to calculate changes of the microstructure. We use the process of grain boundary migration in a "regular" and a partially molten rock to demonstrate the software. Since this software is 3D, the formation of melt networks in a partially molten rock can also be studied. The interconnected melt network is of fundamental importance for melt segregation and migration in the crust and mantle and can help to understand the core-mantle differentiation of large terrestrial planets.
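
    The polyhedron/facet/vertex hierarchy with open-ended per-entity parameters described above can be sketched as plain data containers. The class and field names below are illustrative only and are not taken from the software presented in the abstract.

      from dataclasses import dataclass, field

      @dataclass
      class Vertex:
          x: float
          y: float
          z: float
          params: dict = field(default_factory=dict)   # e.g. {"concentration": 0.1}

      @dataclass
      class Facet:
          vertices: list                               # Vertex objects bounding the facet
          params: dict = field(default_factory=dict)   # e.g. {"surface_energy": 0.6}

      @dataclass
      class Polyhedron:                                # one grain or phase
          facets: list                                 # the shell of the grain
          params: dict = field(default_factory=dict)   # e.g. {"phase": "melt"}

      # a one-facet "grain" only to show how the entities nest
      shell = [Facet([Vertex(0, 0, 0), Vertex(1, 0, 0), Vertex(0, 1, 0)],
                     params={"surface_energy": 0.6})]
      grain = Polyhedron(facets=shell, params={"phase": "solid"})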

  9. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  10. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  11. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as to promote 3D photography not only for scientists but also for amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, the paper deals with samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms. To advise on the optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, which even claims completeness, has been carried out as the result of a systematic survey. In this respect, for example, today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast, and color, recall the stage of the invention of photography.

  12. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology.

  13. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  14. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  15. Expanding Geometry Understanding with 3D Printing

    ERIC Educational Resources Information Center

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  16. 3D Elastic Seismic Wave Propagation Code

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  17. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.

  18. Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization

    NASA Technical Reports Server (NTRS)

    Beaulieu, K.

    2014-01-01

    Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.

  19. 3-D Perspective Kamchatka Peninsula Russia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during the Shuttle Radar Topography Mission (SRTM). In the foreground is the Sea of Okhotsk. Inland from the coast, vegetated floodplains and low relief hills rise toward snow capped peaks. The topographic effects on snow and vegetation distribution are very clear in this near-horizontal view. Forming the skyline is the Sredinnyy Khrebet, the volcanic mountain range that makes up the spine of the peninsula. High resolution SRTM topographic data will be used by geologists to study how volcanoes form and to understand the hazards posed by future eruptions. This image was generated using topographic data from SRTM and an enhanced true-color image from the Landsat 7 satellite. This image contains about 2,400 meters (7,880 feet) of total relief. The topographic expression was enhanced by adding artificial shading as calculated from the SRTM elevation model. The Landsat data was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota. SRTM, launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. To collect the 3-D SRTM data, engineers added a 60- meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. SRTM collected three dimensional measurements of nearly 80 percent of the Earth's surface. SRTM is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. Size: 33.3 km (20.6 miles) wide x 136 km (84 miles) coast to skyline. Location: 58.3 deg. North lat., 160 deg. East long. Orientation: Easterly view, 2 degrees

  20. 3-D Perspective View, Kamchatka Peninsula, Russia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during the Shuttle Radar Topography Mission (SRTM). In the foreground is the Sea of Okhotsk. Inland from the coast, vegetated floodplains and low relief hills rise toward snow capped peaks. The topographic effects on snow and vegetation distribution are very clear in this near-horizontal view. Forming the skyline is the Sredinnyy Khrebet, the volcanic mountain range that makes up the spine of the peninsula. High resolution SRTM topographic data will be used by geologists to study how volcanoes form and to understand the hazards posed by future eruptions.

    This image was generated using topographic data from SRTM and an enhanced true-color image from the Landsat 7 satellite. This image contains about 2,400 meters (7,880 feet) of total relief. The topographic expression was enhanced by adding artificial shading as calculated from the SRTM elevation model. The Landsat data was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.

    SRTM, launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar(SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. To collect the 3-D SRTM data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. SRTM collected three-dimensional measurements of nearly 80 percent of the Earth's surface. SRTM is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, D.C.

    Size: 33.3 km (20.6 miles) wide x

  1. Current efforts on developing an HWIL synthetic environment for LADAR sensor testing at AMRDEC

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.

    2005-05-01

    Efforts in developing a synthetic environment for testing LADAR sensors in a hardware-in-the-loop simulation are continuing at the Aviation and Missile Research, Engineering, and Development Center (AMRDEC) of the U.S. Army Research, Engineering and Development Command (RDECOM). Current activities have concentrated on developing the optical projection hardware portion of the synthetic environment. These activities range from system level design down to component level testing. Of particular interest have been schemes for generating the optical signals representing the individual pixels of the projection. Several approaches have been investigated and tested with emphasis on operating wavelength, intensity dynamic range and uniformity, and flexibility in pixel waveform generation. This paper will discuss some of the results from these current efforts at RDECOM's Advanced Simulation Center (ASC).

  2. Pose recognition of articulated target based on ladar range image with elastic shape analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zheng-Jun; Li, Qi; Wang, Qi

    2014-10-01

    Elastic shape analysis is introduced for pose recognition of articulated targets based on small samples of ladar range images. Shape deformations caused by pose changes are represented as closed elastic curves via the square-root velocity function; geodesics are used to quantify shape differences, and the Karcher mean is used to build a model library. Three kinds of moments - Hu moment invariants, affine moment invariants, and Zernike moment invariants, each combined with support vector machines (SVMs) - are applied to evaluate this approach. The experimental results show that regardless of the azimuth angles of the testing samples, this approach achieves a high recognition rate using only three model samples under different carrier-to-noise ratios (CNR); its performance is much better than that of the three kinds of moments with SVM, especially under high-noise conditions.
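
    The square-root velocity function (SRVF) underlying the elastic curve comparison can be written down in a few lines. The sketch below computes the SRVF of a sampled closed boundary curve and a plain L2 distance between two SRVFs; the full elastic framework would additionally optimize over rotation and re-parametrization, and the function names here are illustrative rather than the authors' implementation.

      import numpy as np

      def srvf(curve):
          """Square-root velocity function q = v / sqrt(|v|) of a sampled closed curve.

          curve : (N, 2) array of boundary points, assumed to wrap around.
          """
          v = np.gradient(curve, axis=0)                      # discrete derivative
          speed = np.linalg.norm(v, axis=1, keepdims=True)
          return v / np.sqrt(np.maximum(speed, 1e-12))

      def srvf_distance(curve_a, curve_b):
          """L2 distance between SRVFs (no rotation/re-parametrization alignment).
          Both curves must be sampled with the same number of points."""
          qa, qb = srvf(curve_a), srvf(curve_b)
          return float(np.sqrt(np.mean(np.sum((qa - qb) ** 2, axis=1))))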

  3. Detection performance improvement of chirped amplitude modulation ladar based on Geiger-mode avalanche photoelectric detector.

    PubMed

    Zhang, Zijing; Wu, Long; Zhang, Yu; Zhao, Yuan; Sun, Xiudong

    2011-12-10

    This paper presents an improved system structure of photon-counting chirped amplitude modulation (AM) ladar based on the Geiger-mode avalanche photoelectric detector (GmAPD). The error-pulse probability is investigated with a statistical method. The research shows that most of the error pulses triggered by noise are distributed in the intensity troughs of the chirped AM waveform. The error-pulse probability is lowered with a sliding window and a threshold. With the average intensities of noise and signal being 0.3 count/sample and 1 count/sample, respectively, the probability of error pulses is reduced from 12% to 1.0%, and the SNR is improved by 2.2 dB in the improved system. PMID:22193131
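
    The sliding-window-plus-threshold rejection described above can be illustrated with a short sketch. The counts array stands in for per-sample GmAPD trigger counts accumulated over many chirp periods, and the window length and threshold are illustrative parameters, not the values used in the paper.

      import numpy as np

      def reject_trough_pulses(counts, window=16, threshold=3):
          """Zero out samples whose local (sliding-window) count sum falls below the
          threshold; noise-triggered pulses in the troughs of the chirped AM waveform
          tend to be isolated and are removed, while signal peaks are kept."""
          windowed = np.convolve(counts, np.ones(window), mode="same")
          return np.where(windowed >= threshold, counts, 0)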

  4. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development, and a few rules of thumb for composing compelling 2D to 3D conversions.

  5. RELAP5-3D User Problems

    SciTech Connect

    Riemke, Richard Allan

    2002-09-01

    The Reactor Excursion and Leak Analysis Program with 3D capability [1] (RELAP5-3D) is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics [2] and 3D neutron kinetics [3,4]. Assessment, verification, and validation of the 3D capability in RELAP5-3D are discussed in the literature [5-10]. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D as well as their resolution.

  6. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with the spatial data and query processing capabilities of geographic information systems, multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling, and a smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independently of the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.
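
    As a rough illustration of the Geo-Node idea (one element bundling imagery, spatial data, and 3-D CAD objects), a container could look like the sketch below; the field names are invented for illustration and do not reflect the paper's actual schema or its DRP buffer manager.

      from dataclasses import dataclass, field

      @dataclass
      class GeoNode:
          node_id: int
          extent: tuple                                    # (xmin, ymin, xmax, ymax)
          image: bytes = b""                               # raster / texture payload
          cad_objects: list = field(default_factory=list)  # 3-D CAD geometry
          attributes: dict = field(default_factory=dict)   # GIS attribute data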

  7. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  8. Multiple-input multiple-output synthetic aperture ladar system for wide-range swath with high azimuth resolution.

    PubMed

    Tang, Yu; Qin, Bao; Yan, Yun; Xing, Mengdao

    2016-02-20

    Because of the trade-off between high azimuth resolution and wide-range swath in a single-input single-output synthetic aperture ladar (SAL) system, the range swath of such a system is restricted to a narrow extent; this paper therefore proposes a multiple-input multiple-output (MIMO) synthetic aperture ladar system. The MIMO system adopts a low pulse repetition frequency (PRF) to avoid range ambiguity over the wide-range swath and, in azimuth, adopts a multi-channel method to achieve high azimuth resolution from the unambiguous azimuth wide-spectrum signal, processed through adaptive digital beam-forming technology. Simulations and analytical results are presented.

  9. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

    Constructing three-dimensional (3-D) landscapes is an unavoidable issue in the deep study of biological ecologies, because at whatever scale in nature, all ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could help complex ecosystems be built easily and mimic the in vivo microenvironment realistically with flexible environmental controls, it would be a powerful thrust to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro-landscapes for in vitro biophysics studies. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, and exploring optimized stenting positions for coronary bifurcation disease with 3-D wax printing and our latest home-designed 3-D bio-printer. Although 3-D technologies are currently considered not mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope that through my talk the audience will be able to sense their significance and the predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  10. Automatic 3D video format detection

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Zhe; Zhai, Jiefu; Doyen, Didier

    2011-03-01

    Many 3D formats exist and will probably co-exist for a long time, even though 3D standards are still being defined today. Support for multiple 3D formats will be important for bringing 3D into the home. In this paper, we propose a novel and effective method to detect whether a video is a 3D video and to further identify the exact 3D format. First, we present how to detect those 3D formats that encode a pair of stereo images into a single image. The proposed method detects features and establishes correspondences between features in the left and right view images, and applies statistics of the distribution of positional differences between corresponding features to detect the existence of a 3D format and to identify it. Second, we present how to detect the frame-sequential 3D format, in which the feature points oscillate from frame to frame. Similarly, the proposed method tracks feature points over consecutive frames, computes the positional differences between features, and makes a detection decision based on whether the features oscillate. Experiments show the effectiveness of our method.
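
    The first detection step (a stereo pair packed into one image) can be sketched as follows for the side-by-side case, using OpenCV ORB features as one possible detector; the match count and vertical-offset thresholds are illustrative, and the paper's actual statistics and decision rule are not reproduced here.

      import cv2
      import numpy as np

      def looks_side_by_side(frame, min_matches=50, max_dy=2.0):
          """Split the frame into halves, match features across them, and call the
          frame side-by-side 3D if matched features differ mostly in x (horizontal
          disparity) while their vertical offsets stay near zero."""
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
          h, w = gray.shape
          left, right = gray[:, : w // 2], gray[:, w // 2:]
          orb = cv2.ORB_create()
          kl, dl = orb.detectAndCompute(left, None)
          kr, dr = orb.detectAndCompute(right, None)
          if dl is None or dr is None:
              return False
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dl, dr)
          if len(matches) < min_matches:
              return False
          dy = [abs(kl[m.queryIdx].pt[1] - kr[m.trainIdx].pt[1]) for m in matches]
          return float(np.median(dy)) < max_dy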

  11. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in a three dimensional saturated porous media. RT3D was developed from the single-species transport code, MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  12. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  13. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  14. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.

  15. [3D reconstructions in radiotherapy planning].

    PubMed

    Schlegel, W

    1991-10-01

    3D Reconstructions from tomographic images are used in the planning of radiation therapy to study important anatomical structures such as the body surface, target volumes, and organs at risk. The reconstructed anatomical models are used to define the geometry of the radiation beams. In addition, 3D voxel models are used for the calculation of the 3D dose distributions with an accuracy, previously impossible to achieve. Further uses of 3D reconstructions are in the display and evaluation of 3D therapy plans, and in the transfer of treatment planning parameters to the irradiation situation with the help of digitally reconstructed radiographs. 3D tomographic imaging with subsequent 3D reconstruction must be regarded as a completely new basis for the planning of radiation therapy, enabling tumor-tailored radiation therapy of localized target volumes with increased radiation doses and improved sparing of organs at risk. 3D treatment planning is currently being evaluated in clinical trials in connection with the new treatment techniques of conformation radiotherapy. Early experience with 3D treatment planning shows that its clinical importance in radiotherapy is growing, but will only become a standard radiotherapy tool when volumetric CT scanning, reliable and user-friendly treatment planning software, and faster and cheaper PACS-integrated medical work stations are accessible to radiotherapists.

  16. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
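
    The one-line command convention described above (the first word is the command name, the rest of the string is its data arguments) is easy to illustrate. The toy dispatcher below is written in Python purely for illustration; the command names are invented and are not FastScript3D's actual vocabulary.

      def parse_command(line):
          """First whitespace-separated token is the command; the rest are arguments."""
          name, *args = line.split()
          return name, args

      def handle(line, scene):
          name, args = parse_command(line)
          if name == "sphere":                     # e.g. "sphere ball 0 1 0 0.5"
              label, x, y, z, r = args
              scene[label] = {"center": [float(x), float(y), float(z)], "radius": float(r)}
          elif name == "move":                     # e.g. "move ball 1 0 0"
              label, dx, dy, dz = args
              scene[label]["center"] = [c + float(d) for c, d in
                                        zip(scene[label]["center"], (dx, dy, dz))]
          else:
              raise ValueError(f"unknown command: {name}")

      scene = {}
      handle("sphere ball 0 1 0 0.5", scene)
      handle("move ball 1 0 0", scene)             # scene["ball"]["center"] == [1.0, 1.0, 0.0]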

  17. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  18. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has achieved wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, each of which can thus be viewed from all directions on its own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate use.

  19. Auditory Imagery: Empirical Findings

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  20. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.
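
    Ultrafast imaging of this kind rests on transmitting unfocused (plane or diverging) waves and reconstructing the whole volume in software. The sketch below is a generic single-transmit, normal-incidence plane-wave delay-and-sum beamformer for a matrix array, given only as a conceptual illustration; it is not the authors' beamformer, and the sampling rate, sound speed, and geometry handling are simplified assumptions.

      import numpy as np

      def delay_and_sum(rf, elem_xy, voxels, c=1540.0, fs=10e6):
          """rf: (n_elements, n_samples) received data with t=0 at transmit;
          elem_xy: (n_elements, 2) element positions (m), array in the z=0 plane;
          voxels: (n_voxels, 3) points to reconstruct (m). Returns one value per voxel."""
          n_elem = rf.shape[0]
          out = np.zeros(len(voxels))
          for i, (x, y, z) in enumerate(voxels):
              tx = z / c                                            # plane wave along +z
              rx = np.sqrt((elem_xy[:, 0] - x) ** 2 + (elem_xy[:, 1] - y) ** 2 + z ** 2) / c
              idx = np.round((tx + rx) * fs).astype(int)
              ok = idx < rf.shape[1]
              out[i] = rf[np.arange(n_elem)[ok], idx[ok]].sum()     # coherent sum
          return out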

  1. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  2. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  3. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of an aerial 3D printing technology, its development and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, are discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used as well as composites including metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High altitude balloons will be used to test the effects of microgravity on 3D printing, as well as parabolic flight tests. Zero pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small scale prototype can be sent into low-Earth orbit as a 3-U cube satellite. With the ability to 3D print in space demonstrated, future missions can launch production hardware through which the sustainability and durability of structures in space will be greatly improved.

  4. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  5. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  6. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  7. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  8. 3D elastic control for mobile devices.

    PubMed

    Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal

    2008-01-01

    To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.

  9. Static & Dynamic Response of 3D Solids

    1996-07-15

    NIKE3D is a large deformations 3D finite element code used to obtain the resulting displacements and stresses from multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature dependent constitutive models are available.

  10. 3D Printing. What's the Harm?

    ERIC Educational Resources Information Center

    Love, Tyler S.; Roy, Ken

    2016-01-01

    Health concerns from 3D printing were first documented by Stephens, Azimi, Orch, and Ramos (2013), who found that commercially available 3D printers were producing hazardous levels of ultrafine particles (UFPs) and volatile organic compounds (VOCs) when plastic materials were melted through the extruder. UFPs are particles less than 100 nanometers…

  11. 3D Printing of Molecular Models

    ERIC Educational Resources Information Center

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  12. A 3D Geostatistical Mapping Tool

    SciTech Connect

    Weiss, W. W.; Stevenson, Graig; Patel, Ketan; Wang, Jun

    1999-02-09

    This software provides accurate 3D reservoir modeling tools and high-quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest neighbor methods.
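
    As a flavor of the simplest of the listed mapping methods, a k-nearest-neighbor gridding of scattered reservoir data could look like the sketch below; this is a generic illustration, not the package's implementation, and the k value is arbitrary.

      import numpy as np

      def nearest_neighbor_grid(xy, values, grid_x, grid_y, k=3):
          """Average the k nearest scattered samples onto each grid node."""
          gx, gy = np.meshgrid(grid_x, grid_y)
          out = np.empty(gx.shape)
          for idx in np.ndindex(out.shape):
              d = np.hypot(xy[:, 0] - gx[idx], xy[:, 1] - gy[idx])
              out[idx] = values[np.argsort(d)[:k]].mean()
          return out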

  13. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D…

  14. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  15. Clinical applications of 3-D dosimeters

    NASA Astrophysics Data System (ADS)

    Wuu, Cheng-Shie

    2015-01-01

    Both 3-D gels and radiochromic plastic dosimeters, in conjunction with dose image readout systems (MRI or optical-CT), have been employed to measure 3-D dose distributions in many clinical applications. The 3-D dose maps obtained from these systems can provide a useful tool for clinical dose verification for complex treatment techniques such as IMRT, SRS/SBRT, brachytherapy, and proton beam therapy. These complex treatments present high dose gradient regions in the boundaries between the target and surrounding critical organs. Dose accuracy in these areas can be critical, and may affect treatment outcome. In this review, applications of 3-D gels and PRESAGE dosimeter are reviewed and evaluated in terms of their performance in providing information on clinical dose verification as well as commissioning of various treatment modalities. Future interests and clinical needs on studies of 3-D dosimetry are also discussed.

  16. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors including radiation hardness, fast time response, active edge and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single-sided, double-column type) 3D detectors in two prototype runs, and the third run is now ongoing. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double-sided 3D detectors, is also briefly reported.

  17. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  18. 3D View of Los Angeles

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size (of full images): 141 by 107 kilometers (88 by 66 miles) Location: 34.5 deg. North lat., 118.7 deg. West lon. Orientation: North toward upper right Image: Landsat bands 1, 2 and 4, 3 as blue, green, and red, respectively Date Acquired: February 16, 2000 (SRTM), November 11, 1986 (Landsat) Image courtesy NASA/JPL/NIMA

  19. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  20. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  1. Low-cost 3D rangefinder system

    NASA Astrophysics Data System (ADS)

    Chen, Bor-Tow; Lou, Wen-Shiou; Chen, Chia-Chen; Lin, Hsien-Chang

    1998-06-01

    Three-dimensional data are now widely used on computers, and 3D browsers let users manipulate 3D models in virtual worlds. Until now, however, 3D digitizers have remained expensive and unfamiliar equipment. To meet the demands of this growing 3D market, this paper proposes a low-cost 3D digitizer system that captures 3D range data from objects. A specially designed optical layout keeps the 3D extraction unit compact, and the processing software runs on a PC, making the system portable; together these features yield a low-cost PC-based system, in contrast to large systems bundled with expensive workstation platforms. For 3D extraction, a laser beam and a CCD camera form the 3D sensor. Instead of two CCD cameras capturing the laser lines, a 2-in-1 arrangement merges the two views onto a single CCD while retaining the information of both fields of view, which suppresses occlusion problems. In addition, the optical paths of the two camera views are folded by mirrors so that the system volume can be reduced with only one rotary axis, making a portable system more practical. Combined with processing software that runs under PC Windows, the proposed system saves both hardware cost and software processing time. The system achieves 0.05 mm accuracy, showing that a low-cost system can also be high-performance.
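
    The depth measurement in such a laser-line/CCD digitizer rests on simple triangulation between the laser sheet and the camera ray. The sketch below shows the basic relation; the baseline, focal length, laser angle, and pixel size are illustrative assumptions, not the authors' optical design.

        # Minimal sketch of laser-triangulation ranging as used by laser-line/CCD 3D
        # digitizers. The baseline, focal length, and laser angle are illustrative
        # assumptions, not the parameters of the system described in the abstract.
        import numpy as np

        f = 8e-3                   # camera focal length [m] (assumed)
        b = 0.10                   # baseline between laser and camera centre [m] (assumed)
        theta = np.deg2rad(30.0)   # laser sheet angle toward the optical axis (assumed)

        def depth_from_pixel(u, pixel_size=5e-6):
            """Depth Z of the illuminated point from its image-column offset u [pixels].

            Camera at the origin looking along +Z; the laser, offset by the baseline b
            along +X, projects a sheet tilted by theta toward the optical axis, so the
            sheet satisfies X = b - Z * tan(theta). The viewing ray through the pixel
            is X = Z * x / f; intersecting the two gives Z = b / (x / f + tan(theta)).
            """
            x = u * pixel_size
            return b / (x / f + np.tan(theta))

        for u in (-50, 0, 50, 200):
            print(u, f"{depth_from_pixel(u):.4f} m")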

  2. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from traditional map-making to a modern technology in which information can be created, edited, managed, and analyzed. Like any other model, maps are simplified representations of the real world; hence visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. Many off-the-shelf technologies are currently available for building 3D GIS models. One objective of this research was to examine ArcGIS and its available extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web as a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines how spatial information is shared and processed. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications, and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses the available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate how 3D geographic information can be shared and distributed on the Internet. A case study, the development of a 3D campus for Southern Illinois University Edwardsville, is presented.

  3. The 3D Elevation Program: summary for Michigan

    USGS Publications Warehouse

    Carswell, William J.

    2014-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation's natural and constructed features. The Michigan Statewide Authoritative Imagery and Lidar (MiSAIL) program provides statewide lidar coordination with local, State, and national groups in support of 3DEP for Michigan.

  4. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article.

  5. 3D facial expression modeling for recognition

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.; Dass, Sarat C.

    2005-03-01

    Current two-dimensional image based face recognition systems encounter difficulties with large variations in facial appearance due to the pose, illumination and expression changes. Utilizing 3D information of human faces is promising for handling the pose and lighting variations. While the 3D shape of a face does not change due to head pose (rigid) and lighting changes, it is not invariant to the non-rigid facial movement and evolution, such as expressions and aging effect. We propose a facial surface matching framework to match multiview facial scans to a 3D face model, where the (non-rigid) expression deformation is explicitly modeled for each subject, resulting in a person-specific deformation model. The thin plate spline (TPS) is applied to model the deformation based on the facial landmarks. The deformation is applied to the 3D neutral expression face model to synthesize the corresponding expression. Both the neutral and the synthesized 3D surface models are used to match a test scan. The surface registration and matching between a test scan and a 3D model are achieved by a modified Iterative Closest Point (ICP) algorithm. Preliminary experimental results demonstrate that the proposed expression modeling and recognition-by-synthesis schemes improve the 3D matching accuracy.
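
    The surface registration step described above builds on the standard rigid ICP loop: find nearest-neighbour correspondences, solve for the best rotation and translation, and iterate. The sketch below shows that baseline loop only; the paper's modifications to ICP and the TPS expression deformation are not reproduced.

        # Minimal sketch of rigid ICP (nearest-neighbour correspondences + SVD-based
        # alignment), the standard algorithm the paper modifies; the paper's specific
        # modifications and the TPS expression deformation are not reproduced here.
        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, n_iter=30):
            """Align point cloud src (N,3) to dst (M,3); returns R, t and the aligned src."""
            src = src.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            tree = cKDTree(dst)
            for _ in range(n_iter):
                _, idx = tree.query(src)                   # closest-point correspondences
                matched = dst[idx]
                mu_s, mu_d = src.mean(0), matched.mean(0)
                H = (src - mu_s).T @ (matched - mu_d)      # cross-covariance
                U, _, Vt = np.linalg.svd(H)
                D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
                R = Vt.T @ D @ U.T                         # best rotation (Kabsch)
                t = mu_d - R @ mu_s
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total, src

        # Toy example: recover a known rotation/translation of a random cloud.
        rng = np.random.default_rng(1)
        dst = rng.normal(size=(500, 3))
        ang = np.deg2rad(10.0)
        R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                           [np.sin(ang),  np.cos(ang), 0],
                           [0, 0, 1]])
        src = (dst - np.array([0.1, -0.2, 0.05])) @ R_true   # transformed copy of dst
        R, t, aligned = icp(src, dst)
        print(np.abs(aligned - dst).max())   # residual should be small after convergence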

  6. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

    It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
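
    The detail-enhancement step can be pictured as unsharp masking applied to vertex positions: smooth the mesh, then push each vertex away from its smoothed position. The sketch below uses uniform (umbrella) Laplacian weights on a toy mesh; it is illustrative only and does not reproduce the paper's weighting or nonlinear scaling scheme.

        # Minimal sketch of Laplacian smoothing and 3-D "unsharp masking" on mesh
        # vertices with uniform (umbrella) weights; the paper's exact weighting and
        # nonlinear scaling scheme are not reproduced here.
        import numpy as np

        def vertex_neighbours(faces, n_vertices):
            """Adjacency lists from an (F, 3) triangle index array."""
            nbrs = [set() for _ in range(n_vertices)]
            for a, b, c in faces:
                nbrs[a].update((b, c)); nbrs[b].update((a, c)); nbrs[c].update((a, b))
            return [np.fromiter(s, dtype=int) for s in nbrs]

        def laplacian_smooth(verts, nbrs, lam=0.5, n_iter=10):
            v = verts.copy()
            for _ in range(n_iter):
                avg = np.stack([v[n].mean(0) if len(n) else v[i]
                                for i, n in enumerate(nbrs)])
                v += lam * (avg - v)      # move each vertex toward its neighbour mean
            return v

        def unsharp_mask(verts, nbrs, amount=1.5, n_iter=10):
            """Enhance geometric detail: smoothed + amount * (verts - smoothed)."""
            smooth = laplacian_smooth(verts, nbrs, n_iter=n_iter)
            return smooth + amount * (verts - smooth)

        # Toy example: a bumpy strip of triangles.
        verts = np.array([[x, y, 0.1 * ((x + y) % 2)]
                          for x in range(4) for y in range(4)], dtype=float)
        faces = np.array([[i * 4 + j, i * 4 + j + 1, (i + 1) * 4 + j]
                          for i in range(3) for j in range(3)])
        nbrs = vertex_neighbours(faces, len(verts))
        print(unsharp_mask(verts, nbrs)[:3])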

  7. NUBEAM developments and 3d halo modeling

    NASA Astrophysics Data System (ADS)

    Gorelenkova, M. V.; Medley, S. S.; Kaye, S. M.

    2012-10-01

    Recent developments related to the 3D halo model in the NUBEAM code are described. To provide a reliable halo neutral source for diagnostic simulation, the TRANSP/NUBEAM code has been enhanced with a full implementation of ADAS atomic physics ground-state and excited-state data for hydrogenic beams and mixed-species plasma targets. The ADAS codes and database provide the density and temperature dependence of the atomic data and capture the collective nature of the state-excitation process. To populate the 3D halo output with sufficient statistical resolution, NUBEAM has been given the capability to control the statistics of fast-ion CX modeling and of the thermal halo launch. The 3D halo neutral model is based on modifying and extending the "beam in box" aligned 3D Cartesian grid, which includes the neutral beam itself, 3D fast-neutral densities due to CX of partially slowed-down fast ions in the beam halo region, 3D thermal-neutral densities due to CX deposition, and the fast-neutral recapture source. More details on the 3D halo simulation design will be presented.

  8. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. PMID:26562233

  9. Perception of detail in 3D images

    NASA Astrophysics Data System (ADS)

    Heynderickx, Ingrid; Kaptein, Ronald

    2009-01-01

    A lot of current 3D displays suffer from the fact that their spatial resolution is lower compared to their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye view leads to blurring or ghosting, and therefore to a decrease in perceived sharpness. However, people watching stereoscopic videos have reported that the 3D scene contained more details, compared to the 2D scene with identical spatial resolution. This is an interesting notion, that has never been tested in a systematic and quantitative way. To investigate this effect, we had people compare the amount of detail ("detailedness") in pairs of 2D and 3D images. A blur filter was applied to one of the two images, and the blur level was varied using an adaptive staircase procedure. In this way, the blur threshold for which the 2D and 3D image contained perceptually the same amount of detail could be found. Our results show that the 3D image needed to be blurred more than the 2D image. This confirms the earlier qualitative findings that 3D images contain perceptually more details than 2D images with the same spatial resolution.
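
    The adaptive staircase logic can be made concrete with a few lines of code: raise or lower the blur depending on the observer's response and record the reversal points. The sketch below uses a simple 1-up/1-down rule against a simulated observer; the authors' exact staircase rule, step sizes, and stopping criterion may differ.

        # Minimal sketch of an adaptive staircase for finding a blur-matching threshold.
        # A simple 1-up/1-down rule (converging on the 50% point) with a simulated
        # observer is used; the authors' exact staircase parameters may differ.
        import numpy as np

        rng = np.random.default_rng(2)
        true_threshold = 2.0   # blur level at which 2D and 3D look equally detailed (assumed)

        def observer_says_3d_more_detailed(blur_3d):
            """Simulated observer: psychometric function around the true threshold."""
            p = 1.0 / (1.0 + np.exp((blur_3d - true_threshold) / 0.3))
            return rng.random() < p

        blur, step = 0.5, 0.4
        reversals, last_direction = [], None
        while len(reversals) < 12:
            if observer_says_3d_more_detailed(blur):
                direction = +1            # 3D still looks more detailed: blur it more
            else:
                direction = -1            # 3D looks less detailed: reduce the blur
            if last_direction is not None and direction != last_direction:
                reversals.append(blur)
                step = max(step * 0.7, 0.05)   # shrink the step after each reversal
            blur = max(blur + direction * step, 0.0)
            last_direction = direction

        print("estimated threshold:", np.mean(reversals[-6:]))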

  10. 3D bioprinting of tissues and organs.

    PubMed

    Murphy, Sean V; Atala, Anthony

    2014-08-01

    Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology. PMID:25093879

  11. Medical 3D Printing for the Radiologist

    PubMed Central

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233

  12. Extra Dimensions: 3D in PDF Documentation

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2012-12-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company. Incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open source libraries which allow the programmatic creation of 3D content in PDF format. While not providing the same level of access to CAD files as the commercial software, these libraries provide physicists with an alternative path to incorporate 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

  13. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  14. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  15. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  16. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  17. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  18. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  19. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  20. VALIDATION OF IMPROVED 3D ATR MODEL

    SciTech Connect

    Soon Sam Kim; Bruce G. Schnitzler

    2005-11-01

    A full-core Monte Carlo based 3D model of the Advanced Test Reactor (ATR) was previously developed. [1] An improved 3D model has been developed by the International Criticality Safety Benchmark Evaluation Project (ICSBEP) to eliminate homogeneity of fuel plates of the old model, incorporate core changes into the new model, and to validate against a newer, more complicated core configuration. This new 3D model adds capability for fuel loading design and azimuthal power peaking studies of the ATR fuel elements.

  1. Explicit 3-D Hydrodynamic FEM Program

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.

  2. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme to hide secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13 layers. To the best of our knowledge, this novel approach can provide much higher hiding capacity than other state-of-the-art approaches, while obeying the low distortion and security basic requirements for steganography on 3D models.
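
    The basic mechanics of hiding data in mesh geometry can be illustrated by forcing the parity of quantized vertex coordinates to carry message bits. The sketch below shows that simple single-layer idea only; it is not the paper's multilayered scheme, which achieves far higher capacity at lower distortion.

        # Minimal sketch of hiding bits in the quantized vertex coordinates of a 3D
        # model; this illustrates the basic idea only and is NOT the multilayered
        # scheme of the paper.
        import numpy as np

        STEP = 1e-4   # quantization step [model units]; controls distortion (assumed)

        def embed(verts, bits):
            """Embed one bit per coordinate by forcing the parity of its quantized value."""
            flat = verts.ravel().copy()
            assert len(bits) <= flat.size
            q = np.round(flat[:len(bits)] / STEP).astype(np.int64)
            q += (q & 1) ^ np.asarray(bits)          # adjust parity to match the bit
            flat[:len(bits)] = q * STEP
            return flat.reshape(verts.shape)

        def extract(verts, n_bits):
            q = np.round(verts.ravel()[:n_bits] / STEP).astype(np.int64)
            return (q & 1).astype(int)

        # Toy example.
        rng = np.random.default_rng(3)
        cover = rng.normal(size=(100, 3))
        message = rng.integers(0, 2, size=64)
        stego = embed(cover, message)
        print("bits recovered:", np.array_equal(extract(stego, 64), message))
        print("max vertex displacement:", np.abs(stego - cover).max())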

  3. How We 3D-Print Aerogel

    SciTech Connect

    2015-04-23

    A new type of graphene aerogel will make for better energy storage, sensors, nanoelectronics, catalysis and separations. Lawrence Livermore National Laboratory researchers have made graphene aerogel microlattices with an engineered architecture via a 3D printing technique known as direct ink writing. The research appears in the April 22 edition of the journal, Nature Communications. The 3D printed graphene aerogels have high surface area, excellent electrical conductivity, are lightweight, have mechanical stiffness and exhibit supercompressibility (up to 90 percent compressive strain). In addition, the 3D printed graphene aerogel microlattices show an order of magnitude improvement over bulk graphene materials and much better mass transport.

  4. FIT3D: Fitting optical spectra

    NASA Astrophysics Data System (ADS)

    Sánchez, S. F.; Pérez, E.; Sánchez-Blázquez, P.; González, J. J.; Rosales-Ortega, F. F.; Cano-Díaz, M.; López-Cobá, C.; Marino, R. A.; Gil de Paz, A.; Mollá, M.; López-Sánchez, A. R.; Ascasibar, Y.; Barrera-Ballesteros, J.

    2016-09-01

    FIT3D fits optical spectra to deblend the underlying stellar population and the ionized gas, and extract physical information from each component. FIT3D is focused on the analysis of Integral Field Spectroscopy data, but is not restricted to it, and is the basis of Pipe3D, a pipeline used in the analysis of datasets like CALIFA, MaNGA, and SAMI. It can run iteratively or in an automatic way to derive the parameters of a large set of spectra.

  5. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high density, high performance microelectronics pursued through a dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  6. Investigations in massive 3D gravity

    SciTech Connect

    Accioly, Antonio; Helayeel-Neto, Jose; Morais, Jefferson; Turcati, Rodrigo; Scatena, Eslley

    2011-05-15

    Some interesting gravitational properties of the Bergshoeff-Hohm-Townsend model (massive 3D gravity), such as the presence of a short-range gravitational force in the nonrelativistic limit and the existence of an impact-parameter-dependent gravitational deflection angle, are studied. Interestingly enough, these phenomena have no counterpart in the usual Einstein 3D gravity. In order to better understand the two aforementioned gravitational properties, they are also analyzed in the framework of 3D higher-derivative gravity with the Einstein-Hilbert term with the 'wrong sign'.

  7. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  8. JAR3D Webserver: Scoring and aligning RNA loop sequences to known 3D motifs

    PubMed Central

    Roll, James; Zirbel, Craig L.; Sweeney, Blake; Petrov, Anton I.; Leontis, Neocles

    2016-01-01

    Many non-coding RNAs have been identified and may function by forming 2D and 3D structures. RNA hairpin and internal loops are often represented as unstructured on secondary structure diagrams, but RNA 3D structures show that most such loops are structured by non-Watson–Crick basepairs and base stacking. Moreover, different RNA sequences can form the same RNA 3D motif. JAR3D finds possible 3D geometries for hairpin and internal loops by matching loop sequences to motif groups from the RNA 3D Motif Atlas, by exact sequence match when possible, and by probabilistic scoring and edit distance for novel sequences. The scoring gauges the ability of the sequences to form the same pattern of interactions observed in 3D structures of the motif. The JAR3D webserver at http://rna.bgsu.edu/jar3d/ takes one or many sequences of a single loop as input, or else one or many sequences of longer RNAs with multiple loops. Each sequence is scored against all current motif groups. The output shows the ten best-matching motif groups. Users can align input sequences to each of the motif groups found by JAR3D. JAR3D will be updated with every release of the RNA 3D Motif Atlas, and so its performance is expected to improve over time. PMID:27235417
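
    For novel sequences, matching a loop against a motif group by edit distance can be illustrated with a few lines of dynamic programming. The sketch below scores a query against made-up exemplar sequences; the motif names and exemplars are hypothetical, and JAR3D's probabilistic scoring model is not reproduced.

        # Minimal sketch of scoring a loop sequence against motif-group exemplars by
        # edit distance; JAR3D's probabilistic scoring is not reproduced, and the
        # exemplar sequences below are made up for illustration.
        def edit_distance(a, b):
            """Classic Levenshtein distance via dynamic programming."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                  # deletion
                                   cur[j - 1] + 1,               # insertion
                                   prev[j - 1] + (ca != cb)))    # substitution / match
                prev = cur
            return prev[-1]

        # Hypothetical motif groups, each with a few exemplar loop sequences.
        motif_groups = {
            "IL_group_A": ["CUCAGUAUG", "CUCAGUACG"],
            "IL_group_B": ["GGAUAUGG", "GGAAAUGG"],
        }

        query = "CUCAGUAAG"
        scores = {name: min(edit_distance(query, ex) for ex in exemplars)
                  for name, exemplars in motif_groups.items()}
        for name, dist in sorted(scores.items(), key=lambda kv: kv[1]):
            print(name, dist)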

  9. XML3D and Xflow: combining declarative 3D for the Web with generic data flows.

    PubMed

    Klein, Felix; Sons, Kristian; Rubinstein, Dmitri; Slusallek, Philipp

    2013-01-01

    Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing. PMID:24808080

  10. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    ERIC Educational Resources Information Center

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…

  11. Resampling technique in the orthogonal direction for down-looking Synthetic Aperture Imaging Ladar

    NASA Astrophysics Data System (ADS)

    Li, Guangyuan; Sun, Jianfeng; Lu, Zhiyong; Zhang, Ning; Cai, Guangyu; Sun, Zhiwei; Liu, Liren

    2015-09-01

    The implementation of down-looking Synthetic Aperture Imaging Ladar (SAIL) uses quadratic phase history reconstruction in the travel direction and linear phase modulation reconstruction in the orthogonal direction. The linear phase modulation in the orthogonal direction is generated by shifting two cylindrical lenses in the two polarization-orthogonal beams. Fast motion of the two cylindrical lenses is therefore necessary in an airborne down-looking SAIL to match the aircraft flight speed and to achieve compression in the orthogonal direction, but the abrupt starts and stops of the lenses badly stress the motor and make the motion trajectory non-uniform. To reduce this stress and obtain a smoother trajectory, we drive the motor along a sinusoidal profile, which is easier to realize mechanically, and use a resampling interpolation imaging algorithm to transform the resulting nonlinear phase into a linear phase; with this approach we obtain good reconstruction results for point and area targets in the laboratory. The influence on imaging quality of different sampling positions within the sinusoidal motion and the necessity of the algorithm are analyzed. Finally, we compare the resolution obtained in the two cases.
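
    The resampling interpolation step can be illustrated simply: samples recorded while the lens position follows a sinusoid are interpolated back onto a uniform position grid, after which the phase history is linear and compresses under a Fourier transform. The sketch below uses illustrative parameters only, not those of the SAIL system in the paper.

        # Minimal sketch of the resampling step: samples taken while the cylindrical
        # lens moves sinusoidally are interpolated onto a uniform (linear-motion) grid,
        # turning the nonlinear phase history into a linear one. All parameters are
        # illustrative assumptions.
        import numpy as np

        n = 1024
        k = 40.0 * 2 * np.pi               # assumed spatial modulation rate [rad per unit travel]

        # Ideal uniform lens positions vs. actual sinusoidal motion over half a period.
        t = np.linspace(0.0, 1.0, n)
        x_uniform = t                                  # desired linear trajectory
        x_sinus = 0.5 * (1.0 - np.cos(np.pi * t))      # actual motion (0 -> 1, sinusoidal)

        signal_sampled = np.exp(1j * k * x_sinus)      # phase history recorded vs. time

        # Resample: for each desired uniform position, interpolate the recorded samples
        # (x_sinus is monotonic over this interval, so 1-D interpolation is valid).
        resampled = (np.interp(x_uniform, x_sinus, signal_sampled.real)
                     + 1j * np.interp(x_uniform, x_sinus, signal_sampled.imag))

        # After resampling, the phase is linear in the uniform coordinate, so its FFT
        # concentrates into (nearly) a single bin, i.e. the orthogonal direction compresses.
        spec_raw = np.abs(np.fft.fft(signal_sampled)) ** 2
        spec_res = np.abs(np.fft.fft(resampled)) ** 2
        print("peak/total energy, raw      :", spec_raw.max() / spec_raw.sum())
        print("peak/total energy, resampled:", spec_res.max() / spec_res.sum())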

  12. Outward atmospheric scintillation effects and inward atmospheric scintillation effects comparisons for direct detection ladar applications

    NASA Astrophysics Data System (ADS)

    Youmans, Douglas G.

    2014-06-01

    Atmospheric turbulence produces intensity modulation, or "scintillation," effects on both the outward laser-mode path and the return backscattered-radiation path. Both degrade laser radar (ladar) target acquisition, ranging, imaging, and feature estimation. However, finite-sized objects create scintillation averaging on the outgoing path, and finite-sized telescope apertures produce scintillation averaging on the return path. We expand on previous papers, moving to moderate-to-strong turbulence cases, by starting from a 20 kft altitude platform and propagating at 0° elevation (with respect to the local vertical) over a 100 km range to a 1 m diameter diffuse sphere. The outward and inward scintillation effects, as measured at the focal plane detector array of the receiving aperture, are compared. To eliminate hard-body surface speckle effects so that scintillation can be studied in isolation, Goodman's M-parameter is set to 10^6 in the analytical equations and a non-coherent imaging algorithm is employed in the Monte Carlo realizations. The analytical equations for the signal-to-noise ratio (SNRp), the mean squared signal over its variance, for a given focal plane array pixel window of interest are summarized and compared to Monte Carlo realizations of a 1 m diffuse sphere.
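
    A stripped-down version of such a scintillation Monte Carlo draws log-normal irradiance samples and forms the pixel signal-to-noise ratio SNRp = (mean)^2 / variance, with aperture or target averaging represented by a reduced scintillation index. The sketch below uses assumed values for the scintillation index and averaging factor and is not the paper's link-budget or imaging model.

        # Minimal sketch of a scintillation Monte Carlo: log-normal irradiance
        # fluctuations with a crude aperture-averaging factor, and the pixel
        # signal-to-noise ratio SNRp = mean^2 / variance. The scintillation index and
        # averaging factor are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(4)

        def snr_p(sigma_I2, n_real=200_000):
            """SNRp for log-normal irradiance with scintillation index sigma_I^2."""
            sigma_ln2 = np.log(1.0 + sigma_I2)       # log-irradiance variance
            I = rng.lognormal(mean=-0.5 * sigma_ln2, sigma=np.sqrt(sigma_ln2), size=n_real)
            return I.mean() ** 2 / I.var()

        sigma_I2_point = 1.2       # assumed point-receiver scintillation index
        aperture_factor = 0.15     # assumed aperture/target averaging factor (< 1)

        print("point receiver    SNRp ~", round(snr_p(sigma_I2_point), 2))
        print("averaged receiver SNRp ~", round(snr_p(aperture_factor * sigma_I2_point), 2))
        # Analytically, SNRp = 1 / sigma_I^2 for pure scintillation noise, so the
        # Monte Carlo values should come out near 1/1.2 and 1/0.18 respectively.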

  13. Texture mapping based on multiple aerial imageries in urban areas

    NASA Astrophysics Data System (ADS)

    Zhou, Guoqing; Ye, Siqi; Wang, Yuefeng; Han, Caiyun; Wang, Chenxi

    2015-12-01

    In realistic 3D model reconstruction, the demands on texture are very high: texture is one of the key factors determining the realism of the model, and it is applied through texture mapping. This paper presents a practical approach to texture mapping, based on photogrammetric theory, from multiple aerial images of urban areas. The collinearity equations are used to match the model to the imagery, and, to improve texture quality, we describe an automatic approach for selecting the optimal texture for each 3D building from aerial images of multiple strips. Building textures can be matched automatically by the algorithm. The experimental results show that the texture mapping platform has a high degree of automation and improves the efficiency of 3D model reconstruction.
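
    Matching a 3D building point to its pixel in an aerial image comes down to the collinearity equations: rotate the ground point into the camera frame and project through the focal length. The sketch below assumes illustrative exterior-orientation parameters and one common omega-phi-kappa convention; it is not tied to the paper's data.

        # Minimal sketch of the collinearity equations used to match a 3-D model point
        # to its position in an aerial image; the exterior-orientation parameters and
        # focal length are illustrative assumptions, not a real flight's values.
        import numpy as np

        def rotation_opk(omega, phi, kappa):
            """Rotation matrix from omega-phi-kappa angles (radians)."""
            co, so = np.cos(omega), np.sin(omega)
            cp, sp = np.cos(phi), np.sin(phi)
            ck, sk = np.cos(kappa), np.sin(kappa)
            Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def project(P, X0, angles, f):
            """Collinearity equations: ground point P -> image coordinates (x, y)."""
            R = rotation_opk(*angles)
            u, v, w = R.T @ (P - X0)        # point expressed in the camera frame
            return -f * u / w, -f * v / w   # image coordinates (principal point at 0, 0)

        X0 = np.array([500.0, 300.0, 1200.0])        # assumed camera position [m]
        angles = np.deg2rad([1.0, -2.0, 15.0])       # assumed omega, phi, kappa
        f = 0.10                                     # assumed focal length [m]

        roof_corner = np.array([520.0, 340.0, 35.0])  # a building roof corner [m]
        print(project(roof_corner, X0, angles, f))    # where its texture lies in the image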

  14. Quality Analysis of 3d Surface Reconstruction Using Multi-Platform Photogrammetric Systems

    NASA Astrophysics Data System (ADS)

    Lari, Z.; El-Sheimy, N.

    2016-06-01

    In recent years, the necessity of accurate 3D surface reconstruction has become more pronounced for a wide range of mapping, modelling, and monitoring applications. The 3D data for satisfying the needs of these applications can be collected using different digital imaging systems. Among them, photogrammetric systems have recently received considerable attention due to significant improvements in digital imaging sensors, emergence of new mapping platforms, and development of innovative data processing techniques. To date, a variety of techniques have been proposed for 3D surface reconstruction using imagery collected by multi-platform photogrammetric systems. However, these approaches suffer from the lack of a well-established quality control procedure which evaluates the quality of reconstructed 3D surfaces independent of the utilized reconstruction technique. Hence, this paper aims to introduce a new quality assessment platform for the evaluation of the 3D surface reconstruction using photogrammetric data. This quality control procedure is performed while considering the quality of input data, processing procedures, and photo-realistic 3D surface modelling. The feasibility of the proposed quality control procedure is finally verified by quality assessment of the 3D surface reconstruction using images from different photogrammetric systems.

  15. TRMM 3-D Flyby of Ingrid

    NASA Video Gallery

    This 3-D flyby of Tropical Storm Ingrid's rainfall was created from TRMM satellite data for Sept. 16. Heaviest rainfall appears in red towers over the Gulf of Mexico, while moderate rainfall stretc...

  16. 3DSEM: A 3D microscopy dataset.

    PubMed

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Holz, Jessica D; Owen, Heather A; Yu, Zeyun

    2016-03-01

    The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. PMID:26779561

  17. 3DSEM: A 3D microscopy dataset

    PubMed Central

    Tafti, Ahmad P.; Kirkpatrick, Andrew B.; Holz, Jessica D.; Owen, Heather A.; Yu, Zeyun

    2015-01-01

    The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. PMID:26779561

  18. Tropical Cyclone Jack in Satellite 3-D

    NASA Video Gallery

    This 3-D flyby from NASA's TRMM satellite of Tropical Cyclone Jack on April 21 shows that some of the thunderstorms seen by TRMM PR were still reaching heights of at least 17 km (10.5 miles). ...

  19. An Augmented Reality based 3D Catalog

    NASA Astrophysics Data System (ADS)

    Yamada, Ryo; Kishimoto, Katsumi

    This paper presents a 3D catalog system that uses Augmented Reality technology. The use of Web-based catalog systems that present products in 3D form is increasing in various fields, along with the rapid and widespread adoption of Electronic Commerce. However, 3D shapes could previously only be seen in a virtual space, and it was difficult to understand how the products would actually look in the real world. To solve this, we propose a method that combines the virtual and real worlds simply and intuitively. The method applies Augmented Reality technology, and the system developed based on the method enables users to evaluate 3D virtual products in a real environment.

  20. 3D-printed bioanalytical devices.

    PubMed

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-07-15

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices.

  1. Cyclone Rusty's Landfall in 3-D

    NASA Video Gallery

    This 3-D image derived from NASA's TRMM satellite Precipitation Radar data on February 26, 2013 at 0654 UTC showed that the tops of some towering thunderstorms in Rusty's eye wall were reaching hei...

  2. 3-D Animation of Typhoon Bopha

    NASA Video Gallery

    This 3-D animation of NASA's TRMM satellite data showed Typhoon Bopha tracking over the Philippines on Dec. 3 and moving into the Sulu Sea on Dec. 4, 2012. TRMM saw heavy rain (red) was falling at ...

  3. Palacios field: A 3-D case history

    SciTech Connect

    McWhorter, R.; Torguson, B.

    1994-12-31

    In late 1992, Mitchell Energy Corporation acquired a 7.75 sq mi (20.0 km^2) 3-D seismic survey over Palacios field, Matagorda County, Texas. The company shot the survey to help evaluate the field for further development by delineating the fault pattern of the producing Middle Oligocene Frio interval. They compare the mapping of the field before and after the 3-D survey. This comparison shows that the 3-D volume yields superior fault imaging and interpretability compared to the dense 2-D data set. The problems with the 2-D data set are improper imaging of small and oblique faults and insufficient coverage over a complex fault pattern. Whereas the 2-D data set validated a simple fault model, the 3-D volume revealed a more complex history of faulting that includes three different fault systems. This discovery enabled them to reconstruct the depositional and structural history of Palacios field.

  4. 3D-printed bioanalytical devices

    NASA Astrophysics Data System (ADS)

    Bishop, Gregory W.; Satterwhite-Warden, Jennifer E.; Kadimisetty, Karteek; Rusling, James F.

    2016-07-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices.

  5. 3-D TRMM Flyby of Hurricane Amanda

    NASA Video Gallery

    The TRMM satellite flew over Hurricane Amanda on Tuesday, May 27 at 1049 UTC (6:49 a.m. EDT) and captured rainfall rates and cloud height data that was used to create this 3-D simulated flyby. Cred...

  6. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, Anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a realtime, 3D interactive means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth Science Data sets that have been collected on a daily basis. This application uses a third-party, 3D, realtime, interactive game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  7. 3D Printing for Tissue Engineering

    PubMed Central

    Jia, Jia; Yao, Hai; Mei, Ying

    2016-01-01

    Tissue engineering aims to fabricate functional tissue for applications in regenerative medicine and drug testing. More recently, 3D printing has shown great promise in tissue fabrication with a structural control from micro- to macro-scale by using a layer-by-layer approach. Whether through scaffold-based or scaffold-free approaches, the standard for 3D printed tissue engineering constructs is to provide a biomimetic structural environment that facilitates tissue formation and promotes host tissue integration (e.g., cellular infiltration, vascularization, and active remodeling). This review will cover several approaches that have advanced the field of 3D printing through novel fabrication methods of tissue engineering constructs. It will also discuss the applications of synthetic and natural materials for 3D printing facilitated tissue fabrication. PMID:26869728

  8. 3DSEM: A 3D microscopy dataset.

    PubMed

    Tafti, Ahmad P; Kirkpatrick, Andrew B; Holz, Jessica D; Owen, Heather A; Yu, Zeyun

    2016-03-01

    The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples.

  9. 3D-printed bioanalytical devices.

    PubMed

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-07-15

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices. PMID:27250897

  10. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
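
    For the stereo-vision route mentioned above, depth follows from the disparity between rectified views: Z = f * B / d. The sketch below estimates disparity for one pixel by block matching on synthetic images; the baseline, focal length, and block size are illustrative assumptions, and depth-from-focus is not shown.

        # Minimal sketch of depth recovery from a rectified stereo pair: block-matching
        # disparity, then Z = f * B / d. Baseline, focal length, and block size are
        # illustrative assumptions; real systems are more involved.
        import numpy as np

        f_px = 1200.0      # focal length in pixels (assumed)
        B = 0.12           # stereo baseline [m] (assumed)

        def disparity_ssd(left, right, x, y, block=5, max_d=40):
            """Disparity at (x, y) by minimising the sum of squared differences."""
            h = block // 2
            patch = left[y - h:y + h + 1, x - h:x + h + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_d):
                if x - d - h < 0:
                    break
                cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best_d, best_cost = d, cost
            return best_d

        # Toy images: a textured scene shifted by a known disparity of 12 pixels.
        rng = np.random.default_rng(5)
        left = rng.random((100, 200))
        true_d = 12
        right = np.zeros_like(left)
        right[:, :-true_d] = left[:, true_d:]     # right view shifted left by true_d

        d = disparity_ssd(left, right, x=120, y=50)
        print("disparity:", d, " depth:", f_px * B / d, "m")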

  11. 3-D Flyover Visualization of Veil Nebula

    NASA Video Gallery

    This 3-D visualization flies across a small portion of the Veil Nebula as photographed by the Hubble Space Telescope. This region is a small part of a huge expanding remnant from a star that explod...

  12. Future Engineers 3-D Print Timelapse

    NASA Video Gallery

    NASA Challenges K-12 students to create a model of a container for space using 3-D modeling software. Astronauts need containers of all kinds - from advanced containers that can study fruit flies t...

  13. Modeling Cellular Processes in 3-D

    PubMed Central

    Mogilner, Alex; Odde, David

    2011-01-01

    Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated, we must address the issue of modeling cellular processes in 3-D. Here, we highlight recent advances related to 3-D modeling in cell biology. While some processes require full 3-D analysis, we suggest that others are more naturally described in 2-D or 1-D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3-D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling. PMID:22036197

  14. Generalized poisson 3-D scatterer distributions.

    PubMed

    Laporte, Catherine; Clark, James J; Arbel, Tal

    2009-02-01

    This paper describes a simple, yet powerful ultrasound scatterer distribution model. The model extends a 1-D generalized Poisson process to multiple dimensions using a Hilbert curve. The model is intuitively tuned by spatial density and regularity parameters which reliably predict the first and second-order statistics of varied synthetic imagery. PMID:19251530
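
    The construction can be sketched as: generate a 1-D point process, then map each 1-D position into a 3-D grid through a space-filling curve. For brevity, the sketch below substitutes a Morton (Z-order) curve for the Hilbert curve and a plain (not generalized) Poisson process for the gap distribution, so it illustrates the idea rather than the paper's model.

        # Minimal sketch of the idea: place scatterers by running a 1-D point process
        # along a space-filling curve that visits a 3-D grid. A Morton (Z-order) curve
        # stands in for the Hilbert curve, and a plain Poisson process supplies the
        # 1-D gaps; the paper's model differs in both respects.
        import numpy as np

        BITS = 6                      # grid of (2^6)^3 = 262144 cells
        rng = np.random.default_rng(6)

        def morton_decode(d, bits=BITS):
            """Map a 1-D curve index d to integer (x, y, z) grid coordinates."""
            x = y = z = 0
            for i in range(bits):
                x |= ((d >> (3 * i)) & 1) << i
                y |= ((d >> (3 * i + 1)) & 1) << i
                z |= ((d >> (3 * i + 2)) & 1) << i
            return x, y, z

        n_cells = (2 ** BITS) ** 3
        mean_gap = 50.0                                   # mean cell spacing along the curve
        gaps = rng.exponential(mean_gap, size=int(1.5 * n_cells / mean_gap))
        positions_1d = np.cumsum(gaps)
        positions_1d = positions_1d[positions_1d < n_cells].astype(np.int64)

        scatterers = np.array([morton_decode(int(d)) for d in positions_1d], dtype=float)
        scatterers /= 2 ** BITS                           # normalise to the unit cube
        print(scatterers.shape, scatterers.min(0), scatterers.max(0))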

  15. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  16. Motif3D: Relating protein sequence motifs to 3D structure.

    PubMed

    Gaulton, Anna; Attwood, Teresa K

    2003-07-01

    Motif3D is a web-based protein structure viewer designed to allow sequence motifs, and in particular those contained in the fingerprints of the PRINTS database, to be visualised on three-dimensional (3D) structures. Additional functionality is provided for the rhodopsin-like G protein-coupled receptors, enabling fingerprint motifs of any of the receptors in this family to be mapped onto the single structure available, that of bovine rhodopsin. Motif3D can be used via the web interface available at: http://www.bioinf.man.ac.uk/dbbrowser/motif3d/motif3d.html.

  17. Assessing 3d Photogrammetry Techniques in Craniometrics

    NASA Astrophysics Data System (ADS)

    Moshobane, M. C.; de Bruyn, P. J. N.; Bester, M. N.

    2016-06-01

    Morphometrics (the measurement of morphological features) has been revolutionized by the creation of new techniques to study how organismal shape co-varies with factors such as ecophenotypy. Ecophenotypy refers to the divergence of phenotypes due to developmental changes induced by local environmental conditions, producing distinct ecophenotypes. None of the techniques hitherto utilized could explicitly address organismal shape in a complete biological form, i.e. three-dimensionally. This study investigates the use of the commercial Photomodeler Scanner® (PMSc®) three-dimensional (3D) modelling software to produce accurate, high-resolution 3D models, specifically models of Subantarctic fur seal (Arctocephalus tropicalis) and Antarctic fur seal (Arctocephalus gazella) skulls from which 3D measurements can be taken. Using this method, sixteen accurate 3D skull models were produced and five metrics were determined. The 3D linear measurements were compared to measurements taken manually with a digital caliper. In addition, repeated measurements were recorded by different researchers to determine repeatability. To allow for comparison, straight-line measurements were taken with the software, on the assumption that close accord with all manually measured features would illustrate the model's accurate replication of reality. Measurements were not significantly different, demonstrating that realistic 3D skull models can be successfully produced to provide a consistent basis for craniometrics, with the additional benefit of allowing non-linear measurements if required.

  18. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  19. CFL3D, FUN3d, and NSU3D Contributions to the Fifth Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Laflin, Kelly R.; Chaffin, Mark S.; Powell, Nicholas; Levy, David W.

    2013-01-01

    Results presented at the Fifth Drag Prediction Workshop using CFL3D, FUN3D, and NSU3D are described. These are calculations on the workshop provided grids and drag adapted grids. The NSU3D results have been updated to reflect an improvement to skin friction calculation on skewed grids. FUN3D results generated after the workshop are included for custom participant generated grids and a grid from a previous workshop. Uniform grid refinement at the design condition shows a tight grouping in calculated drag, where the variation in the pressure component of drag is larger than the skin friction component. At this design condition, a fine-grid drag value was predicted with a smaller drag adjoint adapted grid via tetrahedral adaption to a metric and mixed-element subdivision. The buffet study produced larger variation than the design case, which is attributed to large differences in the predicted side-of-body separation extent. Various modeling and discretization approaches had a strong impact on predicted side-of-body separation. This large wing root separation bubble was not observed in wind tunnel tests, indicating that more work is necessary in modeling wing root juncture flows to predict experiments.

  20. Self assembled structures for 3D integration

    NASA Astrophysics Data System (ADS)

    Rao, Madhav

    Three dimensional (3D) micro-scale structures attached to a silicon substrate have various applications in microelectronics. However, formation of 3D structures using conventional micro-fabrication techniques is not efficient and requires precise control of processing parameters. Self assembly is a method for creating 3D structures that takes advantage of surface area minimization phenomena. Solder based self assembly (SBSA), the subject of this dissertation, uses solder as a facilitator in the formation of 3D structures from 2D patterns. Etching a sacrificial layer underneath a portion of the 2D pattern allows the solder reflow step to pull those areas out of the substrate plane, resulting in a folded 3D structure. Initial studies using the SBSA method demonstrated low yields in the formation of five different polyhedra. The failures in folding were primarily attributed to nonuniform solder deposition on the underlying metal pads. The dip soldering method was analyzed and subsequently refined. A modified dip soldering process provided improved yield among the polyhedra. Solder bridging, the joining of solder deposited on different metal patterns into a single entity, influenced the folding mechanism. In general, design parameters such as small gap-spacings and thick metal pads were found to favor solder bridging for all patterns studied. Two types of soldering, face and edge soldering, were analyzed. Face soldering refers to the application of solder on the entire metal face. Edge soldering indicates application of solder only on the edges of the metal face. Mechanical grinding showed that face soldered SBSA structures were void free and robust in nature. In addition, the face soldered 3D structures provide a consistent heat-resistant solder standoff height that serves as an attachment in the integration of dissimilar electronic technologies. Face soldered 3D structures were developed on the underlying conducting channel to determine the thermo-electric reliability of

  1. PLOT3D Export Tool for Tecplot

    NASA Technical Reports Server (NTRS)

    Alter, Stephen

    2010-01-01

    The PLOT3D export tool for Tecplot solves the problem of modified data being impossible to output for use by another computational science solver. The PLOT3D Exporter add-on enables the use of the most commonly available visualization tools to engineers for output of a standard format. The exportation of PLOT3D data from Tecplot has far reaching effects because it allows for grid and solution manipulation within a graphical user interface (GUI) that is easily customized with macro language-based and user-developed GUIs. The add-on also enables the use of Tecplot as an interpolation tool for solution conversion between different grids of different types. This one add-on enhances the functionality of Tecplot so significantly, it offers the ability to incorporate Tecplot into a general suite of tools for computational science applications as a 3D graphics engine for visualization of all data. Within the PLOT3D Export Add-on are several functions that enhance the operations and effectiveness of the add-on. Unlike Tecplot output functions, the PLOT3D Export Add-on enables the use of the zone selection dialog in Tecplot to choose which zones are to be written by offering three distinct options - output of active, inactive, or all zones (grid blocks). As the user modifies the zones to output with the zone selection dialog, the zones to be written are similarly updated. This enables the use of Tecplot to create multiple configurations of a geometry being analyzed. For example, if an aircraft is loaded with multiple deflections of flaps, by activating and deactivating different zones for a specific flap setting, new specific configurations of that aircraft can be easily generated by only writing out specific zones. Thus, if ten flap settings are loaded into Tecplot, the PLOT3D Export software can output ten different configurations, one for each flap setting.
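
    For readers unfamiliar with the exported format, the sketch below writes a single structured block in the common ASCII multi-block PLOT3D grid layout (block count, block dimensions, then all x, then y, then z coordinates). It is a generic, hedged illustration of the file format, not the add-on's own code.

      import numpy as np

      def write_plot3d_ascii(filename, x, y, z):
          """Write one structured block (matching ni x nj x nk arrays) as an ASCII multi-block PLOT3D grid."""
          ni, nj, nk = x.shape
          with open(filename, "w") as f:
              f.write("1\n")                            # number of blocks
              f.write(f"{ni} {nj} {nk}\n")              # block dimensions
              for arr in (x, y, z):                     # all x, then all y, then all z
                  np.savetxt(f, arr.reshape(-1, order="F"), fmt="%.6e")   # i varies fastest

      # Example: a tiny 3 x 3 x 2 Cartesian block
      xi, yj, zk = np.meshgrid(np.arange(3.0), np.arange(3.0), np.arange(2.0), indexing="ij")
      write_plot3d_ascii("block.xyz", xi, yj, zk)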

  2. A microfluidic device for 2D to 3D and 3D to 3D cell navigation

    NASA Astrophysics Data System (ADS)

    Shamloo, Amir; Amirifar, Leyla

    2016-01-01

    Microfluidic devices have received wide attention and shown great potential in the field of tissue engineering and regenerative medicine. Investigating cell response to various stimulations is much more accurate and comprehensive with the aid of microfluidic devices. In this study, we introduced a microfluidic device by which the matrix density as a mechanical property and the concentration profile of a biochemical factor as a chemical property could be altered. Our microfluidic device has a cell tank and a cell culture chamber to mimic both 2D to 3D and 3D to 3D migration of three types of cells. Fluid shear stress is negligible on the cells and a stable concentration gradient can be obtained by diffusion. The device was designed by a numerical simulation so that the uniformity of the concentration gradients throughout the cell culture chamber was obtained. Adult neural cells were cultured within this device and they showed different branching and axonal navigation phenotypes within varying nerve growth factor (NGF) concentration profiles. Neural stem cells were also cultured within varying collagen matrix densities while exposed to NGF concentrations and they experienced 3D to 3D collective migration. By generating vascular endothelial growth factor concentration gradients, adult human dermal microvascular endothelial cells also migrated in a 2D to 3D manner and formed a stable lumen within a specific collagen matrix density. It was observed that a minimum absolute concentration and concentration gradient were required to stimulate migration of all types of the cells. This device has the advantage of changing multiple parameters simultaneously and is expected to have wide applicability in cell studies.
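
    As a rough illustration of why a stable, shear-free gradient can form by diffusion alone, the sketch below evaluates the 1-D steady diffusion profile between a source and a sink channel; the diffusivity, chamber width and boundary concentrations are assumed values, not parameters of the device.

      import numpy as np

      D = 1.0e-10                       # assumed diffusivity of NGF in collagen, m^2/s
      L = 1.0e-3                        # assumed chamber width, m
      c_source, c_sink = 100.0, 0.0     # assumed boundary concentrations, ng/mL

      x = np.linspace(0.0, L, 11)
      c = c_source + (c_sink - c_source) * x / L   # steady-state linear concentration profile
      tau = L**2 / D                               # diffusive time scale to approach steady state
      print(c)
      print(f"approach to steady state ~ {tau / 3600:.1f} h")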

  3. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.
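
    A toy illustration of graph-based substructure matching in the spirit described (not the RAG-3D algorithm itself) can be written with networkx: a query graph is tested for occurrence as an induced subgraph of each catalogued graph. The graphs below are placeholders.

      import networkx as nx
      from networkx.algorithms import isomorphism

      # Toy "database" of RNA 3D graphs: nodes stand for secondary-structure elements,
      # edges for their connectivity
      db = {
          "motif_A": nx.path_graph(4),
          "motif_B": nx.cycle_graph(5),
      }
      query = nx.path_graph(3)     # hypothetical query substructure

      for name, g in db.items():
          gm = isomorphism.GraphMatcher(g, query)
          if gm.subgraph_is_isomorphic():    # query found as a node-induced substructure?
              print(f"{name}: contains the query subgraph")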

  4. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  5. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  6. Improved Prediction of Momentum and Scalar Fluxes Using MODIS Imagery

    NASA Technical Reports Server (NTRS)

    Crago, Richard D.; Jasinski, Michael F.

    2003-01-01

    There are remote sensing and science objectives. The remote sensing objectives are: To develop and test a theoretical method for estimating local momentum aerodynamic roughness length, z(sub 0m), using satellite multispectral imagery. To adapt the method to the MODIS imagery. To develop a high-resolution (approx. 1km) gridded dataset of local momentum roughness for the continental United States and southern Canada, using MODIS imagery and other MODIS derived products. The science objective is: To determine the sensitivity of improved satellite-derived (MODIS-) estimates of surface roughness on the momentum and scalar fluxes, within the context of 3-D atmospheric modeling.
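
    For context, the momentum roughness length z(sub 0m) enters the neutral logarithmic wind profile u(z) = (u*/k) ln(z/z(sub 0m)). The sketch below inverts that profile for z(sub 0m) from wind speeds measured at two heights; the numbers are illustrative, and this is not the satellite retrieval method of the project.

      import numpy as np

      def roughness_length(u1, z1, u2, z2):
          """Invert the neutral log wind profile u(z) = (u*/k) ln(z/z0m) for z0m from two levels."""
          ln_z0 = (u2 * np.log(z1) - u1 * np.log(z2)) / (u2 - u1)
          return np.exp(ln_z0)

      print(roughness_length(u1=3.2, z1=2.0, u2=4.1, z2=10.0))   # z0m in metres, illustrative numbers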

  7. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
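
    As a minimal sketch of the kind of 3-D multilevel wavelet decomposition that ICER-3D builds on, a hyperspectral cube can be decomposed along all three dimensions with PyWavelets; the wavelet choice, cube size and level count are arbitrary assumptions, and none of the context-modeling, entropy-coding or error-containment stages are shown.

      import numpy as np
      import pywt

      cube = np.random.rand(32, 64, 64)      # (bands, rows, cols) stand-in hyperspectral cube
      coeffs = pywt.wavedecn(cube, wavelet="bior4.4", level=2)   # 3-D multilevel wavelet transform
      approx = coeffs[0]                     # coarse approximation subband
      print(approx.shape, len(coeffs) - 1)   # coarse-band shape and number of detail levels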

  8. T-HEMP3D user manual

    SciTech Connect

    Turner, D.

    1983-08-01

    The T-HEMP3D (Transportable HEMP3D) computer program is a derivative of the STEALTH three-dimensional thermodynamics code developed by Science Applications, Inc., under the direction of Ron Hofmann. STEALTH, in turn, is based entirely on the original HEMP3D code written at Lawrence Livermore National Laboratory. The primary advantage STEALTH has over its predecessors is that it was designed using modern structured design techniques, with rigorous programming standards enforced. This yields two benefits. First, the code is easily changeable; this is a necessity for a physics code used for research. The second benefit is that the code is easily transportable between different types of computers. The STEALTH program was transferred to LLNL under a cooperative development agreement. Changes were made primarily in three areas: material specification, coordinate generation, and the addition of sliding surface boundary conditions. The code was renamed T-HEMP3D to avoid confusion with other versions of STEALTH. This document summarizes the input to T-HEMP3D, as used at LLNL. It does not describe the physics simulated by the program, nor the numerical techniques employed. Furthermore, it does not describe the separate job steps of coordinate generation and post-processing, including graphical display of results. (WHK)

  9. The importance of 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Low, Daniel

    2015-01-01

    Radiation therapy has been getting progressively more complex for the past 20 years. Early radiation therapy techniques needed only basic dosimetry equipment; motorized water phantoms, ionization chambers, and basic radiographic film techniques. As intensity modulated radiation therapy and image guided therapy came into widespread practice, medical physicists were challenged with developing effective and efficient dose measurement techniques. The complex 3-dimensional (3D) nature of the dose distributions that were being delivered demanded the development of more quantitative and more thorough methods for dose measurement. The quality assurance vendors developed a wide array of multidetector arrays that have been enormously useful for measuring and characterizing dose distributions, and these have been made especially useful with the advent of 3D dose calculation systems based on the array measurements, as well as measurements made using film and portal imagers. Other vendors have been providing 3D calculations based on data from the linear accelerator or the record and verify system, providing thorough evaluation of the dose but lacking quality assurance (QA) of the dose delivery process, including machine calibration. The current state of 3D dosimetry is one of a state of flux. The vendors and professional associations are trying to determine the optimal balance between thorough QA, labor efficiency, and quantitation. This balance will take some time to reach, but a necessary component will be the 3D measurement and independent calculation of delivered radiation therapy dose distributions.

  10. 3D Spray Droplet Distributions in Sneezes

    NASA Astrophysics Data System (ADS)

    Techet, Alexandra; Scharfman, Barry; Bourouiba, Lydia

    2015-11-01

    3D spray droplet clouds generated during human sneezing are investigated using the Synthetic Aperture Feature Extraction (SAFE) method, which relies on light field imaging (LFI) and synthetic aperture (SA) refocusing computational photographic techniques. An array of nine high-speed cameras are used to image sneeze droplets and tracked the droplets in 3D space and time (3D + T). An additional high-speed camera is utilized to track the motion of the head during sneezing. In the SAFE method, the raw images recorded by each camera in the array are preprocessed and binarized, simplifying post processing after image refocusing and enabling the extraction of feature sizes and positions in 3D + T. These binary images are refocused using either additive or multiplicative methods, combined with thresholding. Sneeze droplet centroids, radii, distributions and trajectories are determined and compared with existing data. The reconstructed 3D droplet centroids and radii enable a more complete understanding of the physical extent and fluid dynamics of sneeze ejecta. These measurements are important for understanding the infectious disease transmission potential of sneezes in various indoor environments.
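
    A minimal numpy sketch of the shift-and-combine refocusing used in synthetic aperture imaging: each camera's binarized image is shifted in proportion to its baseline offset for a chosen focal parameter, then combined additively or multiplicatively. The camera offsets, focal parameter and integer-pixel shifts are simplifying assumptions, not the SAFE implementation.

      import numpy as np

      def sa_refocus(images, offsets, alpha, multiplicative=False):
          """Shift-and-combine refocusing: images is (ncam, H, W), offsets is (ncam, 2) in pixels."""
          stack = []
          for img, (dy, dx) in zip(images, offsets):
              sy, sx = int(round(alpha * dy)), int(round(alpha * dx))
              stack.append(np.roll(np.roll(img, sy, axis=0), sx, axis=1))
          stack = np.array(stack)
          return stack.prod(axis=0) if multiplicative else stack.mean(axis=0)

      imgs = (np.random.rand(9, 128, 128) > 0.98).astype(float)   # binarized droplet images, 9 cameras
      offs = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float) * 20.0
      focused = sa_refocus(imgs, offs, alpha=0.5, multiplicative=True)
      print(focused.sum())    # number of pixels in focus at this depth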

  11. Extra dimensions: 3D in PDF documentation

    SciTech Connect

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  12. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains, not only for novice radiologists, a difficult task. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly due to missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three dimensional space, using a given projection matrix. To countervail the error connected to the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework, which computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
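
    The backprojection step can be illustrated with a small numpy sketch: the 2D tip defines a viewing ray through the camera centre (recovered from the 3x4 projection matrix), and the 3D estimate is taken as the vessel-centreline point closest to that ray. The projection matrix and centreline below are made up, and the respiratory motion compensation and statistical framework are omitted.

      import numpy as np

      P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])   # made-up 3x4 projection matrix
      centerline = np.array([[0.1, 0.2, 3.0], [0.0, 0.1, 4.0], [0.2, 0.0, 6.0]])  # toy 3D vessel model
      tip2d = np.array([0.02, 0.03])          # detected guide-wire tip in image coordinates

      # Camera centre C is the null space of P; a point on the ray from the pseudo-inverse of P
      C = -np.linalg.inv(P[:, :3]) @ P[:, 3]
      Xh = np.linalg.pinv(P) @ np.append(tip2d, 1.0)
      d = Xh[:3] / Xh[3] - C
      d /= np.linalg.norm(d)

      # Pick the centreline point with the smallest distance to the viewing ray
      dist = np.linalg.norm(np.cross(centerline - C, d), axis=1)
      tip3d = centerline[np.argmin(dist)]
      print(tip3d)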

  13. 3D bioprinting for engineering complex tissues.

    PubMed

    Mandrycky, Christian; Wang, Zongjie; Kim, Keekyoung; Kim, Deok-Ho

    2016-01-01

    Bioprinting is a 3D fabrication technology used to precisely dispense cell-laden biomaterials for the construction of complex 3D functional living tissues or artificial organs. While still in its early stages, bioprinting strategies have demonstrated their potential use in regenerative medicine to generate a variety of transplantable tissues, including skin, cartilage, and bone. However, current bioprinting approaches still have technical challenges in terms of high-resolution cell deposition, controlled cell distributions, vascularization, and innervation within complex 3D tissues. While no one-size-fits-all approach to bioprinting has emerged, it remains an on-demand, versatile fabrication technique that may address the growing organ shortage as well as provide a high-throughput method for cell patterning at the micrometer scale for broad biomedical engineering applications. In this review, we introduce the basic principles, materials, integration strategies and applications of bioprinting. We also discuss the recent developments, current challenges and future prospects of 3D bioprinting for engineering complex tissues. Combined with recent advances in human pluripotent stem cell technologies, 3D-bioprinted tissue models could serve as an enabling platform for high-throughput predictive drug screening and more effective regenerative therapies.

  14. Shim3d Helmholtz Solution Package

    2009-01-29

    This suite of codes solves the Helmholtz Equation for the steady-state propagation of single-frequency electromagnetic radiation in an arbitrary 2D or 3D dielectric medium. Materials can be either transparent or absorptive (including metals) and are described entirely by their shape and complex dielectric constant. Dielectric boundaries are assumed to always fall on grid boundaries and the material within a single grid cell is considered to be uniform. Input to the problem is in the form of a Dirichlet boundary condition on a single boundary, and may be either analytic (Gaussian) in shape, or a mode shape computed using a separate code (such as the included eigenmode solver vwave20), and written to a file. Solution is via the finite difference method using Jacobi iteration for 3D problems or direct matrix inversion for 2D problems. Note that 3D problems that include metals will require different iteration parameters than described in the above reference. For structures with curved boundaries not easily modeled on a rectangular grid, the auxiliary codes helmholtz11(2D), helm3d (semivectoral), and helmv3d (full vectoral) are provided. For these codes the finite difference equations are specified on a topologically regular triangular grid and solved using Jacobi iteration or direct matrix inversion as before. An automatic grid generator is supplied.
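
    A minimal 2-D analogue of the finite-difference/Jacobi approach described (not the packaged solver): the interior Jacobi update for the scalar Helmholtz equation (del^2 + k^2 n^2)E = 0 on a uniform grid, driven by a Gaussian Dirichlet condition on one edge. The grid size, wavelength and iteration count are arbitrary, and convergence of plain Jacobi depends on those choices.

      import numpy as np

      nx, ny, h = 80, 80, 0.05            # grid size and spacing (assumed)
      k0 = 2 * np.pi / 1.0                # free-space wavenumber for unit wavelength (assumed)
      n_index = np.ones((nx, ny))         # uniform dielectric; complex values would model absorption
      E = np.zeros((nx, ny), dtype=complex)
      E[0, :] = np.exp(-((np.arange(ny) - ny / 2) * h) ** 2 / 0.5)   # Gaussian Dirichlet input edge

      for _ in range(2000):               # Jacobi sweeps over the interior points
          neighbors = E[2:, 1:-1] + E[:-2, 1:-1] + E[1:-1, 2:] + E[1:-1, :-2]
          E[1:-1, 1:-1] = neighbors / (4.0 - (h * k0 * n_index[1:-1, 1:-1]) ** 2)
      print(np.abs(E).max())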

  15. Full-color holographic 3D printer

    NASA Astrophysics Data System (ADS)

    Takano, Masami; Shigeta, Hiroaki; Nishihara, Takashi; Yamaguchi, Masahiro; Takahashi, Susumu; Ohyama, Nagaaki; Kobayashi, Akihiko; Iwata, Fujio

    2003-05-01

    A holographic 3D printer is a system that produces a direct hologram with full-parallax information using the 3-dimensional data of a subject from a computer. In this paper, we present a proposal for the reproduction of full-color images with the holographic 3D printer. In order to realize the 3-dimensional color image, we selected the 3 laser wavelength colors of red (λ=633nm), green (λ=533nm), and blue (λ=442nm), and we built a one-step optical system using a projection system and a liquid crystal display. The 3-dimensional color image is obtained by synthesizing, in a 2D array, the multiple exposures with these 3 wavelengths made on each 250mm elementary hologram, moving the recording medium on an x-y stage. For natural color reproduction in the holographic 3D printer, we take a digital processing approach based on color management technology. The matching between the input and output colors is performed by investigating, first, the relation between the gray-level transmittance of the LCD and the diffraction efficiency of the hologram and, second, by measuring the color displayed by the hologram to establish a correlation. In our first experimental results, a non-linear functional relation for single and multiple exposures of the three components was found. These results are the first step in the realization of a natural color 3D image produced by the holographic color 3D printer.

  16. DYNA3D Code Practices and Developments

    SciTech Connect

    Lin, L.; Zywicz, E.; Raboin, P.

    2000-04-21

    DYNA3D is an explicit, finite element code developed to solve high rate dynamic simulations for problems of interest to the engineering mechanics community. The DYNA3D code has been under continuous development since 1976[1] by the Methods Development Group in the Mechanical Engineering Department of Lawrence Livermore National Laboratory. The pace of code development activities has substantially increased in the past five years, growing from one to between four and six code developers. This has necessitated the use of software tools such as CVS (Concurrent Versions System) to help manage multiple version updates. While on-line documentation with an Adobe PDF manual helps to communicate software developments, periodically a summary document describing recent changes and improvements in DYNA3D software is needed. The first part of this report describes issues surrounding software versions and source control. The remainder of this report details the major capability improvements since the last publicly released version of DYNA3D in 1996. Not included here are the many hundreds of bug corrections and minor enhancements, nor the development in DYNA3D between the manual release in 1993[2] and the public code release in 1996.

  17. BEAMS3D Neutral Beam Injection Model

    NASA Astrophysics Data System (ADS)

    McMillan, Matthew; Lazerson, Samuel A.

    2014-09-01

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous slowing down, and pitch angle scattering are modeled with the ADAS atomic physics database. Elementary benchmark calculations are presented to verify the collisionless particle orbits, NBI model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields. Notice: this manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the US Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  18. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken the center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions, such as 3D scanning or manual design, that scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  19. Magnetic Properties of 3D Printed Toroids

    NASA Astrophysics Data System (ADS)

    Bollig, Lindsey; Otto, Austin; Hilpisch, Peter; Mowry, Greg; Nelson-Cheeseman, Brittany; Renewable Energy; Alternatives Lab (REAL) Team

    Transformers are ubiquitous in electronics today. Although toroidal geometries perform most efficiently, transformers are traditionally made with rectangular cross-sections due to the lower manufacturing costs. Additive manufacturing techniques (3D printing) can easily achieve toroidal geometries by building up a part through a series of 2D layers. To get strong magnetic properties in a 3D printed transformer, a composite filament is used containing Fe dispersed in a polymer matrix. How the resulting 3D printed toroid responds to a magnetic field depends on two structural factors of the printed 2D layers: fill factor (planar density) and fill pattern. In this work, we investigate how the fill factor and fill pattern affect the magnetic properties of 3D printed toroids. The magnetic properties of the printed toroids are measured by a custom circuit that produces a hysteresis loop for each toroid. Toroids with various fill factors and fill patterns are compared to determine how these two factors can affect the magnetic field the toroid can produce. These 3D printed toroids can be used for numerous applications in order to increase the efficiency of transformers by making it possible for manufacturers to make a toroidal geometry.

  20. 3D bioprinting for engineering complex tissues.

    PubMed

    Mandrycky, Christian; Wang, Zongjie; Kim, Keekyoung; Kim, Deok-Ho

    2016-01-01

    Bioprinting is a 3D fabrication technology used to precisely dispense cell-laden biomaterials for the construction of complex 3D functional living tissues or artificial organs. While still in its early stages, bioprinting strategies have demonstrated their potential use in regenerative medicine to generate a variety of transplantable tissues, including skin, cartilage, and bone. However, current bioprinting approaches still have technical challenges in terms of high-resolution cell deposition, controlled cell distributions, vascularization, and innervation within complex 3D tissues. While no one-size-fits-all approach to bioprinting has emerged, it remains an on-demand, versatile fabrication technique that may address the growing organ shortage as well as provide a high-throughput method for cell patterning at the micrometer scale for broad biomedical engineering applications. In this review, we introduce the basic principles, materials, integration strategies and applications of bioprinting. We also discuss the recent developments, current challenges and future prospects of 3D bioprinting for engineering complex tissues. Combined with recent advances in human pluripotent stem cell technologies, 3D-bioprinted tissue models could serve as an enabling platform for high-throughput predictive drug screening and more effective regenerative therapies. PMID:26724184

  1. 3D culture for cardiac cells.

    PubMed

    Zuppinger, Christian

    2016-07-01

    This review discusses historical milestones, recent developments and challenges in the area of 3D culture models with cardiovascular cell types. Expectations in this area have been raised in recent years, but more relevant in vitro research, more accurate drug testing results, reliable disease models and insights leading to bioartificial organs are expected from the transition to 3D cell culture. However, the construction of organ-like cardiac 3D models currently remains a difficult challenge. The heart consists of highly differentiated cells in an intricate arrangement. Furthermore, electrical “wiring”, a vascular system and multiple cell types act in concert to respond to the rapidly changing demands of the body. Although cardiovascular 3D culture models have been predominantly developed for regenerative medicine in the past, their use in drug screening and for disease models has become more popular recently. Many sophisticated 3D culture models are currently being developed in this dynamic area of life science. This article is part of a Special Issue entitled: Cardiomyocyte Biology: Integration of Developmental and Environmental Cues in the Heart edited by Marcus Schaub and Hughes Abriel.

  2. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic image system with a size of 35x35x105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the biological specimen's image can be captured in a single shot for ease of use. With the light field raw data and program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm to precisely determine depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light field microscope algorithm to these focal stacks, a set of cross sections will be produced, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules in order to enhance pixel usage efficiency and reduce the crosstalk between microlenses to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescent particles separated by a cover glass over a 600um range, and show its focal stacks and 3-D positions.

  3. Extra dimensions: 3D in PDF documentation

    DOE PAGES

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  4. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R & D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on 3D objects using an original constructive calculation method. Efficient algorithms for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method were suggested, along with studies of the formation of shadows and images of typical elements of extended objects. Ensuring the safety of nuclear reactors and running trains as well as their high exploitation reliability requires a 100% noncontact precise inspection of geometrical parameters of their components. To solve this problem, we have developed methods and produced the technical vision measuring systems LMM, CONTROL, PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic COMPLEX for noncontact inspection of geometric parameters of running freight car wheel pairs. The performances of these systems and the results of industrial testing are presented and discussed. The created devices are in pilot operation at atomic and railway companies.

  5. Mobile 3d Mapping with a Low-Cost Uav System

    NASA Astrophysics Data System (ADS)

    Neitzel, F.; Klonowski, J.

    2011-09-01

    In this contribution it is shown how an UAV system can be built at low costs. The components of the system, the equipment as well as the control software are presented. Furthermore an implemented programme for photogrammetric flight planning and its execution are described. The main focus of this contribution is on the generation of 3D point clouds from digital imagery. For this web services and free software solutions are presented which automatically generate 3D point clouds from arbitrary image configurations. Possibilities of georeferencing are described whereas the achieved accuracy has been determined. The presented workflow is finally used for the acquisition of 3D geodata. On the example of a landfill survey it is shown that marketable products can be derived using a low-cost UAV.
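
    One common way to georeference such a photogrammetric point cloud is a 3D similarity (Helmert) transform estimated from ground control points. The sketch below uses the Umeyama/Procrustes least-squares solution with made-up coordinates; it illustrates the georeferencing step in general terms, not the workflow of this paper.

      import numpy as np

      def similarity_transform(src, dst):
          """Least-squares scale s, rotation R, translation t with dst ~ s * R @ src + t (Umeyama)."""
          mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
          src_c, dst_c = src - mu_s, dst - mu_d
          U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
          D = np.eye(3)
          if np.linalg.det(U @ Vt) < 0:      # guard against reflections
              D[2, 2] = -1
          R = U @ D @ Vt
          s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
          t = mu_d - s * R @ mu_s
          return s, R, t

      # Made-up control points: local model coordinates vs. surveyed coordinates
      local = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 2]], dtype=float)
      survey = 1.02 * local + np.array([500.0, 2000.0, 35.0])
      s, R, t = similarity_transform(local, survey)
      print(s, t)    # recovers the assumed scale and offset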

  6. 3D Modelling of Interior Spaces: Learning the Language of Indoor Architecture

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz-Vilariño, L.

    2014-06-01

    3D models of indoor environments are important in many applications, but they usually exist only for newly constructed buildings. Automated approaches to modelling indoor environments from imagery and/or point clouds can make the process easier, faster and cheaper. We present an approach to 3D indoor modelling based on a shape grammar. We demonstrate that interior spaces can be modelled by iteratively placing, connecting and merging cuboid shapes. We also show that the parameters and sequence of grammar rules can be learned automatically from a point cloud. Experiments with simulated and real point clouds show promising results, and indicate the potential of the method in 3D modelling of large indoor environments.

  7. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one one-thousandth to one one-millionth of that at the Earth's surface (10(exp -3) - 10(exp -6) G), thus one must be able to generate simulations that replicate those microgravity effects upon simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.

  8. 3D differential phase contrast microscopy

    NASA Astrophysics Data System (ADS)

    Chen, Michael; Tian, Lei; Waller, Laura

    2016-03-01

    We demonstrate three-dimensional (3D) optical phase and amplitude reconstruction based on coded source illumination using a programmable LED array. Multiple stacks of images along the optical axis are computed from recorded intensities captured by multiple images under off-axis illumination. Based on the first Born approximation, a linear differential phase contrast (DPC) model is built between 3D complex index of refraction and the intensity stacks. Therefore, 3D volume reconstruction can be achieved via a fast inversion method, without the intermediate 2D phase retrieval step. Our system employs spatially partially coherent illumination, so the transverse resolution achieves twice the NA of coherent systems, while axial resolution is also improved 2× as compared to holographic imaging.
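
    A heavily simplified sketch of the inversion step: given a linear forward operator H mapping the (flattened) 3D index contrast to the measured intensity stacks, a Tikhonov-regularized least-squares solve recovers the volume. The dense random operator, problem sizes and regularization weight are placeholders, not the paper's actual transfer functions or fast inversion method.

      import numpy as np

      rng = np.random.default_rng(1)
      n_vox, n_meas = 50, 120
      H = rng.standard_normal((n_meas, n_vox))      # placeholder linear DPC forward operator
      f_true = rng.standard_normal(n_vox)           # unknown 3D refractive-index contrast (flattened)
      I_meas = H @ f_true + 0.01 * rng.standard_normal(n_meas)   # simulated intensity data

      lam = 0.1                                     # Tikhonov regularization weight (assumed)
      f_rec = np.linalg.solve(H.T @ H + lam * np.eye(n_vox), H.T @ I_meas)
      print(np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))   # relative reconstruction error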

  9. The CIFIST 3D model atmosphere grid.

    NASA Astrophysics Data System (ADS)

    Ludwig, H.-G.; Caffau, E.; Steffen, M.; Freytag, B.; Bonifacio, P.; Kučinskas, A.

    Grids of stellar atmosphere models and associated synthetic spectra are numerical products which have a large impact in astronomy due to their ubiquitous application in the interpretation of radiation from individual stars and stellar populations. 3D model atmospheres are now on the verge of becoming generally available for a wide range of stellar atmospheric parameters. We report on efforts to develop a grid of 3D model atmospheres for late-type stars within the CIFIST Team at Paris Observatory. The substantial demands in computational and human labor for the model production and post-processing render this apparently mundane task a challenging logistic exercise. At the moment the CIFIST grid comprises 77 3D model atmospheres with emphasis on dwarfs of solar and sub-solar metallicities. While the model production is still ongoing, first applications are already worked upon by the CIFIST Team and collaborators.

  10. 3D Printed Multimaterial Microfluidic Valve.

    PubMed

    Keating, Steven J; Gariboldi, Maria Isabella; Patrick, William G; Sharma, Sunanda; Kong, David S; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics.

  11. Simple, portable, 3-D projection routine

    SciTech Connect

    Wagner, J.S.

    1987-04-01

    A 3-D projection routine is presented for use in computer graphics applications. The routine is simple enough to be considered portable, and easily modified for special problems. There is often the need to draw three-dimensional objects on a two-dimensional plotting surface. For the object to appear realistic, perspective effects must be included that allow near objects to appear larger than distant objects. Several 3-D projection routines are commercially available, but they are proprietary, not portable, and not easily changed by the user. Most are restricted to surfaces that are functions of two variables. This makes them unsuitable for viewing physical objects such as accelerator prototypes or propagating beams. This report develops a very simple algorithm for 3-D projections; the core routine is only 39 FORTRAN lines long. It can be easily modified for special problems. Software dependent calls are confined to simple drivers that can be exchanged when different plotting software packages are used.
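
    In the same spirit as the routine described (though in Python rather than the original FORTRAN), a compact perspective projection can be written in a few lines: rotate the points into the camera frame, offset by the viewer distance, and divide by depth so that near objects appear larger. The angles, viewer distance and focal factor are arbitrary.

      import numpy as np

      def project(points, yaw=0.3, pitch=0.2, viewer_dist=10.0, focal=2.0):
          """Project Nx3 world points to Nx2 screen coordinates with simple perspective."""
          cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
          Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])     # rotate about z (yaw)
          Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])     # rotate about x (pitch)
          cam = points @ (Rx @ Rz).T
          depth = cam[:, 2] + viewer_dist        # small depth (near object) -> larger image
          return focal * cam[:, :2] / depth[:, None]

      cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], dtype=float)
      print(project(cube))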

  12. Ames Lab 101: 3D Metals Printer

    SciTech Connect

    Ott, Ryan

    2014-02-13

    To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.

  13. 3D Printed Multimaterial Microfluidic Valve

    PubMed Central

    Patrick, William G.; Sharma, Sunanda; Kong, David S.; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  14. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method by means of light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation can effectively achieve high dynamic range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for performing high-quality 3D imaging for highly and lowly reflective surfaces. PMID:27607639
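
    The per-ray calibration can be sketched as a small least-squares fit: for each ray, record the unwrapped phase at several known reference depths and fit the mapping coefficients. The linear phase-depth model, the simulated phases and the plane depths below are assumptions for illustration; the paper derives the actual mapping.

      import numpy as np

      rng = np.random.default_rng(2)
      ref_depths = np.array([100.0, 150.0, 200.0, 250.0])   # assumed calibration plane depths (mm)
      n_rays = 5
      true_a, true_b = rng.uniform(0.8, 1.2, n_rays), rng.uniform(-5, 5, n_rays)
      phases = (ref_depths[None, :] - true_b[:, None]) / true_a[:, None]   # simulated unwrapped phases

      coeffs = [np.polyfit(phases[r], ref_depths, deg=1) for r in range(n_rays)]   # per-ray (a, b)
      depth_of_ray0 = np.polyval(coeffs[0], phases[0, 2])
      print(depth_of_ray0)    # should recover ~200 mm for this ray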

  15. 3D-printed microfluidic devices.

    PubMed

    Amin, Reza; Knowlton, Stephanie; Hart, Alexander; Yenilmez, Bekir; Ghaderinezhad, Fariba; Katebifar, Sara; Messina, Michael; Khademhosseini, Ali; Tasoglu, Savas

    2016-06-20

    Microfluidics is a flourishing field, enabling a wide range of biochemical and clinical applications such as cancer screening, micro-physiological system engineering, high-throughput drug testing, and point-of-care diagnostics. However, fabrication of microfluidic devices is often complicated, time consuming, and requires expensive equipment and sophisticated cleanroom facilities. Three-dimensional (3D) printing presents a promising alternative to traditional techniques such as lithography and PDMS-glass bonding, not only by enabling rapid design iterations in the development stage, but also by reducing the costs associated with institutional infrastructure, equipment installation, maintenance, and physical space. With the recent advancements in 3D printing technologies, highly complex microfluidic devices can be fabricated via single-step, rapid, and cost-effective protocols, making microfluidics more accessible to users. In this review, we discuss a broad range of approaches for the application of 3D printing technology to fabrication of micro-scale lab-on-a-chip devices.

  16. 3D Printed Multimaterial Microfluidic Valve.

    PubMed

    Keating, Steven J; Gariboldi, Maria Isabella; Patrick, William G; Sharma, Sunanda; Kong, David S; Oxman, Neri

    2016-01-01

    We present a novel 3D printed multimaterial microfluidic proportional valve. The microfluidic valve is a fundamental primitive that enables the development of programmable, automated devices for controlling fluids in a precise manner. We discuss valve characterization results, as well as exploratory design variations in channel width, membrane thickness, and membrane stiffness. Compared to previous single material 3D printed valves that are stiff, these printed valves constrain fluidic deformation spatially, through combinations of stiff and flexible materials, to enable intricate geometries in an actuated, functionally graded device. Research presented marks a shift towards 3D printing multi-property programmable fluidic devices in a single step, in which integrated multimaterial valves can be used to control complex fluidic reactions for a variety of applications, including DNA assembly and analysis, continuous sampling and sensing, and soft robotics. PMID:27525809

  17. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  18. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  19. 3D face analysis for demographic biometrics

    SciTech Connect

    Tokola, Ryan A; Mikkilineni, Aravind K; Boehnen, Chris Bensing

    2015-01-01

    Despite being increasingly easy to acquire, 3D data is rarely used for face-based biometrics applications beyond identification. Recent work in image-based demographic biometrics has enjoyed much success, but these approaches suffer from the well-known limitations of 2D representations, particularly variations in illumination, texture, and pose, as well as a fundamental inability to describe 3D shape. This paper shows that simple 3D shape features in a face-based coordinate system are capable of representing many biometric attributes without problem-specific models or specialized domain knowledge. The same feature vector achieves impressive results for problems as diverse as age estimation, gender classification, and race classification.

  20. 3-D Finite Element Heat Transfer

    1992-02-01

    TOPAZ3D is a three-dimensional implicit finite element computer code for heat transfer analysis. TOPAZ3D can be used to solve for the steady-state or transient temperature field on three-dimensional geometries. Material properties may be temperature-dependent and either isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions can be specified including temperature, flux, convection, and radiation. By implementing the user subroutine feature, users can model chemical reaction kinetics and allow for any type of functional representation of boundary conditions and internal heat generation. TOPAZ3D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermal contact resistance across an interface, bulk fluids, phase change, and energy balances.
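
    For illustration only, the class of problem TOPAZ3D solves can be reduced to a one-dimensional explicit finite-difference sketch of transient heat conduction with a fixed-temperature boundary and an insulated boundary; the material properties, grid, and boundary values below are arbitrary assumptions, and the actual code uses implicit 3D finite elements.

        # 1D explicit finite-difference sketch of transient heat conduction,
        # T_t = alpha * T_xx, with a fixed temperature at x=0 and an insulated end.
        # This is only a toy analogue of the 3D implicit finite-element problems
        # TOPAZ3D handles; the material and grid values are arbitrary.
        import numpy as np

        alpha = 1.0e-5             # thermal diffusivity, m^2/s
        dx = 0.01                  # grid spacing, m
        dt = 0.4 * dx**2 / alpha   # time step within the explicit stability limit
        n_steps = 500

        T = np.full(50, 20.0)      # initial temperature, deg C
        for _ in range(n_steps):
            T_new = T.copy()
            T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
            T_new[0] = 100.0       # fixed-temperature boundary
            T_new[-1] = T_new[-2]  # insulated (zero-flux) boundary
            T = T_new

        print(f"temperature at mid-rod after {n_steps*dt:.0f} s: {T[25]:.1f} C")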

  1. Ames Lab 101: 3D Metals Printer

    ScienceCinema

    Ott, Ryan

    2016-07-12

    To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.

  2. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor.

  3. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications, and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principle of 3D laser scanning, taking the laser point cloud data as the basis and the Digital Ortho-photo Map as an auxiliary source, and using 3ds Max software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the 3D scene is more faithful to reality and that its accuracy meets the needs of 3D scene construction.

  4. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phones' displays, as if the display were a window into this space. Besides lines, our prototype application also supports 3D geometry creation and geometry transformation operations, and it shows the location of the other user's phone.
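
    The pose-from-markers step described above can be illustrated with a minimal sketch built on OpenCV's solvePnP; the marker size, camera intrinsics, and detected corner coordinates below are hypothetical placeholders rather than values from the authors' prototype.

        # Minimal pose-estimation sketch: recover a phone's pose from one detected
        # fiducial marker whose corner positions are known in room coordinates.
        # Assumes OpenCV (cv2) and NumPy; all numeric values are illustrative.
        import numpy as np
        import cv2

        # Known 3D corners of a 10 cm square marker lying on a table (meters).
        object_points = np.array([[0.00, 0.00, 0.0],
                                  [0.10, 0.00, 0.0],
                                  [0.10, 0.10, 0.0],
                                  [0.00, 0.10, 0.0]], dtype=np.float32)

        # Corresponding 2D corners detected in the phone image (pixels, hypothetical).
        image_points = np.array([[412.0, 300.5],
                                 [525.3, 298.7],
                                 [530.1, 410.2],
                                 [409.8, 415.6]], dtype=np.float32)

        # Camera intrinsics from a prior calibration (hypothetical values).
        camera_matrix = np.array([[800.0, 0.0, 320.0],
                                  [0.0, 800.0, 240.0],
                                  [0.0, 0.0, 1.0]], dtype=np.float32)
        dist_coeffs = np.zeros(5)  # assume negligible lens distortion

        # Solve for the marker-to-camera rotation and translation.
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                      camera_matrix, dist_coeffs)
        if ok:
            R, _ = cv2.Rodrigues(rvec)   # rotation matrix
            cam_pos = -R.T @ tvec        # phone position in the room frame
            print("phone position (m):", cam_pos.ravel())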

  5. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor. PMID:26386332

  6. Spatial watermarking of 3D triangle meshes

    NASA Astrophysics Data System (ADS)

    Cayre, Francois; Macq, Benoit M. M.

    2001-12-01

    Although watermarking has clearly become of great interest for protecting audio, video, and still pictures, little work has been done on 3D meshes. We propose a new method for watermarking 3D triangle meshes. This method embeds the watermark as triangle deformations. The list of watermarked triangles is obtained in a way similar to that used in the TSPS (Triangle Strip Peeling Sequence) method. Unlike TSPS, our method is automatic and more secure. We also show that it is reversible.
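
    As a generic illustration of embedding a watermark as triangle deformations (not the TSPS-derived scheme of the paper), the toy sketch below nudges each selected triangle's vertices toward or away from its centroid to encode one bit; the strength parameter and bit layout are arbitrary assumptions.

        # Toy illustration of watermark bits embedded as small triangle deformations.
        # Each selected triangle's vertices are moved slightly away from (bit=1) or
        # toward (bit=0) the triangle centroid. Generic sketch only, not the
        # TSPS-based method described in the abstract.
        import numpy as np

        def embed_bits(vertices, triangles, bits, strength=1e-3):
            """Return a watermarked copy of `vertices` (N x 3 float array)."""
            v = vertices.copy()
            for tri, bit in zip(triangles, bits):
                centroid = v[tri].mean(axis=0)
                sign = 1.0 if bit else -1.0
                v[tri] += sign * strength * (v[tri] - centroid)
            return v

        # Example: a single triangle carrying one bit.
        verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        tris = np.array([[0, 1, 2]])
        marked = embed_bits(verts, tris, bits=[1])
        print(marked)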

  7. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, through to medical and entertainment applications, are discussed in this paper. We describe the procedure for capturing 3D images with an Inspeck camera, a real-time, high-resolution 3D shape acquisition system based on structured-light techniques. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field, applications include plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  8. Superplastic forming using NIKE3D

    SciTech Connect

    Puso, M.

    1996-12-04

    The superplastic forming process requires careful control of strain rates in order to avoid strain localizations. A load scheduler was developed and implemented in the nonlinear finite element code NIKE3D to provide strain rate control during forming simulation and process schedule output. The sheets formed in SPF are often very thin, so less expensive membrane elements can be used instead of shell elements. A large-strain membrane element was implemented in NIKE3D to assist in SPF process modeling.

  9. The Galicia 3D experiment: an Introduction.

    NASA Astrophysics Data System (ADS)

    Reston, Timothy; Martinez Loriente, Sara; Holroyd, Luke; Merry, Tobias; Sawyer, Dale; Morgan, Julia; Jordan, Brian; Tesi Sanjurjo, Mari; Alexanian, Ara; Shillington, Donna; Gibson, James; Minshull, Tim; Karplus, Marianne; Bayracki, Gaye; Davy, Richard; Klaeschen, Dirk; Papenberg, Cord; Ranero, Cesar; Perez-Gussinye, Marta; Martinez, Miguel

    2014-05-01

    In June and July 2013, scientists from 8 institutions took part in the Galicia 3D seismic experiment, the first-ever crustal-scale academic 3D MCS survey over a rifted margin. The aim was to determine the 3D structure of a critical portion of the west Galicia rifted margin. At this margin, well-defined tilted fault blocks, bound by west-dipping faults and capped by synrift sediments, are underlain by a bright reflection, undulating on time sections, termed the S reflector and thought to represent a major detachment fault of some kind. Moving west, the crust thins to zero thickness and mantle is unroofed, as evidenced by the "Peridotite Ridge" first reported at this margin, but since observed at many other magma-poor margins. By imaging such a margin in detail, the experiment aimed to resolve the processes controlling crustal thinning and mantle unroofing at a type-example magma-poor margin. The experiment set out to collect several key datasets: a 3D seismic reflection volume measuring ~20x64 km and extending down to ~14 s TWT, a 3D ocean bottom seismometer dataset suitable for full wavefield inversion (the recording of the complete 3D seismic shots by 70 ocean bottom instruments), the "mirror imaging" of the crust using the same grid of OBS, a single 2D combined reflection/refraction profile extending to the west to determine the transition from unroofed mantle to true oceanic crust, and the seismic imaging of the water column, calibrated by regular deployment of XBTs to measure the temperature structure of the water column. We collected 1280 km2 of seismic reflection data, consisting of 136533 shots recorded on 1920 channels, producing 260 million seismic traces, each ~14 s long. This adds up to ~8 terabytes of data, representing, we believe, the largest ever academic 3D MCS survey in terms of both the area covered and the volume of data. The OBS deployment was the largest ever within an academic 3D survey.

  10. 3D Modeling Engine Representation Summary Report

    SciTech Connect

    Steven Prescott; Ramprasad Sampath; Curtis Smith; Timothy Yang

    2014-09-01

    Computers have been used for 3D modeling and simulation, but only recently have computational resources been able to give realistic results in a reasonable time frame for large complex models. This summary report addressed the methods, techniques, and resources used to develop a 3D modeling engine to represent risk analysis simulation for advanced small modular reactor structures and components. The simulations done for this evaluation were focused on external events, specifically tsunami floods, for a hypothetical nuclear power facility on a coastline.

  11. Immersive 3D geovisualisation in higher education

    NASA Astrophysics Data System (ADS)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2014-05-01

    Through geovisualisation we explore spatial data, we analyse it with respect to specific questions, we synthesise results, and we present and communicate them to a specific audience (MacEachren & Kraak 1997). After centuries of paper maps, the means to represent and visualise our physical environment and its abstract qualities have changed dramatically since the 1990s - and, accordingly, so have the methods of using geovisualisation in teaching. Whereas some people might still consider the traditional classroom the ideal setting for teaching and learning geographic relationships and their mapping, we used a 3D CAVE (computer-animated virtual environment) as the environment for a problem-oriented learning project called "GEOSimulator". Focussing on this project, we empirically investigated whether a technological advance such as the CAVE makes 3D visualisation, including 3D geovisualisation, an important tool not only for businesses (Abulrub et al. 2012) and for the public (Wissen et al. 2008), but also for educational purposes, for which it had hardly been used yet. The 3D CAVE is a three-sided visualisation platform that allows for immersive and stereoscopic visualisation of observed and simulated spatial data. We examined the benefits of immersive 3D visualisation for geographic research and education and synthesised three fundamental technology-based visual aspects: First, the conception and comprehension of space and location do not need to be generated, but are instantaneously and intuitively present through stereoscopy. Second, optical immersion into virtual reality strengthens this spatial perception, which is particularly important for complex 3D geometries. And third, a significant benefit is interactivity, which is enhanced through immersion and allows for multi-discursive and dynamic data exploration and knowledge transfer. Based on our problem-oriented learning project, which concentrates on a case study on flood risk management at the Wilde Weisseritz in Germany, a river

  12. 3D printed diffractive terahertz lenses.

    PubMed

    Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław

    2016-04-15

    A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, binary phase lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335
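
    As a rough sketch of the design space for such lenses, the zone radii of a conventional binary Fresnel zone plate at 0.625 THz follow from the standard zone formula; the 25 mm focal length assumed below is an illustrative value, not a parameter reported in the abstract, and the paper's aperiodic profiles follow different layouts.

        # Radii of the first few Fresnel zones for a binary zone-plate lens at 0.625 THz.
        # The 25 mm focal length is an assumed example value.
        import math

        c = 3.0e8                 # speed of light, m/s
        freq = 0.625e12           # design frequency, Hz
        wavelength = c / freq     # ~0.48 mm
        focal_length = 0.025      # assumed focal length, m

        for n in range(1, 6):
            r_n = math.sqrt(n * wavelength * focal_length + (n * wavelength / 2) ** 2)
            print(f"zone {n}: radius = {r_n * 1e3:.2f} mm")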

  13. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  14. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm

    PubMed Central

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  15. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  16. Coherent backscatter: measurement of the retroreflective BRDF peak exhibited by several surfaces relevant to ladar applications

    NASA Astrophysics Data System (ADS)

    Papetti, Thomas J.; Walker, William E.; Keffer, Charles E.; Johnson, Billy E.

    2007-09-01

    The sharp retroreflective peak that is commonly exhibited in the bidirectional reflectivity distribution function of diffuse surfaces was investigated for several materials relevant to ladar applications. The accurate prediction of target cross-sections requires target surface BRDF measurements in the vicinity of this peak. Measurements were made using the beamsplitter-based scatterometer at the U.S. Army's Advanced Measurements Optical Range (AMOR) at Redstone Arsenal, Alabama. Co-polarized and cross-polarized BRDF values at 532 nm and 1064 nm were obtained as the bistatic angle was varied for several degrees about, and including, the monostatic point with a resolution of better than 2 mrad. Measurements covered a wide range of incidence angles. Materials measured included polyurethane coated nylons (PCNs), Spectralon, a silica phenolic, and various paints. For the co-polarized case, a retroreflective peak was found to be nearly ubiquitous for high albedo materials, with relative heights as great as 1.7 times the region surrounding the peak and half-widths between 0.11° and 1.3°. The shape of the observed peaks very closely matched coherent backscattering theory, though the phenomena observed could not be positively attributed to coherent backscattering or shadow hiding alone. Several data features were noted that may be of relevance to modelers of these phenomena, including the fact that the widths of the peaks were approximately the same for 532 nm as for 1064 nm and an observation that at large incidence angles, the width of the peak usually broadened in the in-plane bistatic direction.

  17. Geiger-mode avalanche photodiode focal plane arrays for three-dimensional imaging LADAR

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph

    2010-09-01

    We report on the development of focal plane arrays (FPAs) employing two-dimensional arrays of InGaAsP-based Geiger-mode avalanche photodiodes (GmAPDs). These FPAs incorporate InP/InGaAs(P) Geiger-mode avalanche photodiodes (GmAPDs) to create pixels that detect single photons at shortwave infrared wavelengths with high efficiency and low dark count rates. GmAPD arrays are hybridized to CMOS read-out integrated circuits (ROICs) that enable independent laser radar (LADAR) time-of-flight measurements for each pixel, providing three-dimensional image data at frame rates approaching 200 kHz. Microlens arrays are used to maintain a high fill factor of greater than 70%. We present full-array performance maps for two different types of sensors optimized for operation at 1.06 μm and 1.55 μm, respectively. For the 1.06 μm FPAs, overall photon detection efficiency of >40% is achieved at <20 kHz dark count rates with modest cooling to ~250 K using integrated thermoelectric coolers. We also describe the first evaluation of these FPAs when multi-photon pulses are incident on single pixels. The effective detection efficiency for multi-photon pulses shows excellent agreement with predictions based on Poisson statistics. We also characterize the crosstalk as a function of pulse mean photon number. Relative to the intrinsic crosstalk contribution from hot carrier luminescence that occurs during avalanche current flows resulting from single incident photons, we find a modest rise in crosstalk for multi-photon incident pulses that can be accurately explained by direct optical scattering.
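
    The Poisson-statistics prediction referred to above can be stated directly: for a per-photon detection efficiency eta and a pulse with mean photon number mu, the probability that a Geiger-mode pixel fires is 1 - exp(-eta*mu). A minimal sketch, using the ~40% efficiency quoted for the 1.06 μm devices and illustrative mean photon numbers:

        # Effective firing probability of a Geiger-mode APD pixel for multi-photon
        # pulses, assuming Poisson photon-number statistics and per-photon detection
        # efficiency eta. The mean photon numbers below are illustrative.
        import math

        def fire_probability(eta, mu):
            """P(at least one detected photon) = 1 - exp(-eta * mu)."""
            return 1.0 - math.exp(-eta * mu)

        eta = 0.40   # per-photon detection efficiency (1.06 um figure quoted above)
        for mu in (0.1, 0.5, 1.0, 2.0, 5.0):
            print(f"mean photons {mu:4.1f} -> fire probability {fire_probability(eta, mu):.3f}")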

  18. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    NASA Astrophysics Data System (ADS)

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We report our recent developments in the DFD (depth-fused 3D) display and the arc 3D display, both of which have smooth movement parallax. First, the fatigueless DFD display, composed of only two layered displays separated by a gap, provides continuous perceived depth by changing the luminance ratio between the two images. Two new methods, called the "edge-based DFD display" and the "deep DFD display", have been proposed to address two severe problems: viewing-angle and perceived-depth limitations. The edge-based DFD display, which layers the original 2D image and its edge part with a gap, can relax the DFD viewing-angle limitation in both 2D and 3D perception. The deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Second, the arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example arc-shaped scratches on a flat board. A curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide an unusual 3D image, for example a floating image inside a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. Directional scattering can be switched on and off by changing the liquid-crystal refractive index, thereby switching the arc 3D image.

  19. Innovations in 3D printing: a 3D overview from optics to organs.

    PubMed

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints. PMID:24288392

  20. Innovations in 3D printing: a 3D overview from optics to organs.

    PubMed

    Schubert, Carl; van Langeveld, Mark C; Donoso, Larry A

    2014-02-01

    3D printing is a method of manufacturing in which materials, such as plastic or metal, are deposited onto one another in layers to produce a three dimensional object, such as a pair of eye glasses or other 3D objects. This process contrasts with traditional ink-based printers which produce a two dimensional object (ink on paper). To date, 3D printing has primarily been used in engineering to create engineering prototypes. However, recent advances in printing materials have now enabled 3D printers to make objects that are comparable with traditionally manufactured items. In contrast with conventional printers, 3D printing has the potential to enable mass customisation of goods on a large scale and has relevance in medicine including ophthalmology. 3D printing has already been proved viable in several medical applications including the manufacture of eyeglasses, custom prosthetic devices and dental implants. In this review, we discuss the potential for 3D printing to revolutionise manufacturing in the same way as the printing press revolutionised conventional printing. The applications and limitations of 3D printing are discussed; the production process is demonstrated by producing a set of eyeglass frames from 3D blueprints.

  1. The EISCAT_3D Science Case

    NASA Astrophysics Data System (ADS)

    Tjulin, A.; Mann, I.; McCrea, I.; Aikio, A. T.

    2013-05-01

    EISCAT_3D will be a world-leading international research infrastructure using the incoherent scatter technique to study the atmosphere in the Fenno-Scandinavian Arctic and to investigate how the Earth's atmosphere is coupled to space. The EISCAT_3D phased-array multistatic radar system will be operated by the EISCAT Scientific Association and will thus be an integral part of an organisation that has successfully been running incoherent scatter radars for more than thirty years. The baseline design of the radar system contains a core site with transmitting and receiving capabilities located close to the intersection of the Swedish, Norwegian and Finnish borders and five receiving sites located within 50 to 250 km from the core. The EISCAT_3D project is currently in its Preparatory Phase and can transition smoothly into implementation in 2014, provided sufficient funding is secured. Construction can start in 2016, with first operations in 2018. The EISCAT_3D Science Case is prepared as part of the Preparatory Phase. It is regularly updated with new annual releases, and it aims to be a common document for the whole future EISCAT_3D user community. The areas covered by the Science Case are atmospheric physics and global change; space and plasma physics; solar system research; space weather and service applications; and radar techniques, new methods for coding and analysis. Two of the aims for EISCAT_3D are to understand the ways natural variability in the upper atmosphere, imposed by the Sun-Earth system, can influence the middle and lower atmosphere, and to improve the predictive capability of atmospheric models by providing higher-resolution observations to replace the current parametrised input. Observations by EISCAT_3D will also be used to monitor the direct effects from the Sun on the ionosphere-atmosphere system and those caused by solar wind-magnetosphere-ionosphere interaction. In addition, EISCAT_3D will be used for remote sensing the large-scale behaviour of the magnetosphere from its

  2. Scoops3D: software to analyze 3D slope stability throughout a digital landscape

    USGS Publications Warehouse

    Reid, Mark E.; Christian, Sarah B.; Brien, Dianne L.; Henderson, Scott T.

    2015-01-01

    The computer program, Scoops3D, evaluates slope stability throughout a digital landscape represented by a digital elevation model (DEM). The program uses a three-dimensional (3D) method of columns approach to assess the stability of many (typically millions) potential landslides within a user-defined size range. For each potential landslide (or failure), Scoops3D assesses the stability of a rotational, spherical slip surface encompassing many DEM cells using a 3D version of either Bishop’s simplified method or the Ordinary (Fellenius) method of limit-equilibrium analysis. Scoops3D has several options for the user to systematically and efficiently search throughout an entire DEM, thereby incorporating the effects of complex surface topography. In a thorough search, each DEM cell is included in multiple potential failures, and Scoops3D records the lowest stability (factor of safety) for each DEM cell, as well as the size (volume or area) associated with each of these potential landslides. It also determines the least-stable potential failure for the entire DEM. The user has a variety of options for building a 3D domain, including layers or full 3D distributions of strength and pore-water pressures, simplistic earthquake loading, and unsaturated suction conditions. Results from Scoops3D can be readily incorporated into a geographic information system (GIS) or other visualization software. This manual includes information on the theoretical basis for the slope-stability analysis, requirements for constructing and searching a 3D domain, a detailed operational guide (including step-by-step instructions for using the graphical user interface [GUI] software, Scoops3D-i) and input/output file specifications, practical considerations for conducting an analysis, results of verification tests, and multiple examples illustrating the capabilities of Scoops3D. Easy-to-use software installation packages are available for the Windows or Macintosh operating systems; these packages
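
    A minimal sketch of the Ordinary (Fellenius) limit-equilibrium sum that Scoops3D offers as one of its two analysis options is given below; the column weights, geometry, and soil parameters are invented for illustration, and the sketch omits the full 3D moment-equilibrium bookkeeping, earthquake loading, and unsaturated-suction options of the program.

        # Minimal Ordinary (Fellenius) factor-of-safety sum over columns intersected
        # by a trial slip surface. All inputs are invented illustrative numbers;
        # this is not the Scoops3D implementation.
        import math

        # One record per column: (weight N, base area m^2, base inclination rad,
        #                         pore pressure Pa, cohesion Pa, friction angle rad)
        columns = [
            (1.2e5, 1.0, math.radians(18), 1.0e4, 5.0e3, math.radians(30)),
            (1.5e5, 1.0, math.radians(22), 1.2e4, 5.0e3, math.radians(30)),
            (1.1e5, 1.0, math.radians(26), 0.8e4, 5.0e3, math.radians(30)),
        ]

        resisting = 0.0
        driving = 0.0
        for W, A, alpha, u, c, phi in columns:
            normal = W * math.cos(alpha) - u * A          # effective normal force
            resisting += c * A + normal * math.tan(phi)   # available shear resistance
            driving += W * math.sin(alpha)                # shear demand along slip

        factor_of_safety = resisting / driving
        print(f"factor of safety = {factor_of_safety:.2f}")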

  3. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and the increased demand for smart phones, there has been significant growth in mobile TV markets. The rapid growth in technical, economic, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices, so that people have more opportunities to come into contact with 3D content anytime and anywhere. Even though mobile 3D technology is driving the current market growth, there is an important issue to consider for consistent development and growth in the display market: human factors linked to mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system during exposure to mobile 3D environments. By recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results are expected to provide viewing guidelines, help protect viewers against undesirable 3D effects, and support gradual progress towards human-friendly mobile 3D viewing.

  4. How to See Shadows in 3D

    ERIC Educational Resources Information Center

    Parikesit, Gea O. F.

    2014-01-01

    Shadows can be found easily everywhere around us, so that we rarely find it interesting to reflect on how they work. In order to raise curiosity among students on the optics of shadows, we can display the shadows in 3D, particularly using a stereoscopic set-up. In this paper we describe the optics of stereoscopic shadows using simple schematic…

  5. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  6. Virtual Representations in 3D Learning Environments

    ERIC Educational Resources Information Center

    Shonfeld, Miri; Kritz, Miki

    2013-01-01

    This research explores the extent to which virtual worlds can serve as online collaborative learning environments for students by increasing social presence and engagement. 3D environments enable learning, which simulates face-to-face encounters while retaining the advantages of online learning. Students in Education departments created avatars…

  7. 3D Cell Culture in Alginate Hydrogels

    PubMed Central

    Andersen, Therese; Auk-Emblem, Pia; Dornish, Michael

    2015-01-01

    This review compiles information regarding the use of alginate, and in particular alginate hydrogels, in culturing cells in 3D. Knowledge of alginate chemical structure and functionality are shown to be important parameters in design of alginate-based matrices for cell culture. Gel elasticity as well as hydrogel stability can be impacted by the type of alginate used, its concentration, the choice of gelation technique (ionic or covalent), and divalent cation chosen as the gel inducing ion. The use of peptide-coupled alginate can control cell–matrix interactions. Gelation of alginate with concomitant immobilization of cells can take various forms. Droplets or beads have been utilized since the 1980s for immobilizing cells. Newer matrices such as macroporous scaffolds are now entering the 3D cell culture product market. Finally, delayed gelling, injectable, alginate systems show utility in the translation of in vitro cell culture to in vivo tissue engineering applications. Alginate has a history and a future in 3D cell culture. Historically, cells were encapsulated in alginate droplets cross-linked with calcium for the development of artificial organs. Now, several commercial products based on alginate are being used as 3D cell culture systems that also demonstrate the possibility of replacing or regenerating tissue. PMID:27600217

  8. GPM 3D Flyby of Hurricane Lester

    NASA Video Gallery

    This 3-D flyby of Lester was created using GPM's Radar data. NASA/JAXA's GPM core observatory satellite flew over Hurricane Lester on August 29, 2016 at 7:21 p.m. EDT. Rain was measured by GPM's ra...

  9. Invertible authentication for 3D meshes

    NASA Astrophysics Data System (ADS)

    Dittmann, Jana; Benedens, Oliver

    2003-06-01

    Digital watermarking has become an accepted technology for enabling multimedia protection schemes. Based on the media-independent protocol schemes for invertible data authentication introduced in references 2, 4 and 5, we discuss the design of a new 3D invertible labeling technique to ensure and require high data integrity. We combine digital signature schemes and digital watermarking to provide publicly verifiable integrity. Furthermore, the protocol steps in those papers that ensure the original data can only be reproduced with a secret key are adopted for 3D meshes. The goal is to show how the existing protocol can be used for 3D meshes to provide solutions for authentication watermarking. In our design concept and evaluation we see that, due to the nature of 3D meshes, the invertible functions needed to achieve invertibility and guarantee reversibility of the original differ from those used for images and audio. Therefore we introduce a concept for distortion-free invertibility and a concept for adjustable minimum-distortion invertibility.

  10. [3D virtual endoscopy of heart].

    PubMed

    Du, Aan; Yang, Xin; Xue, Haihong; Yao, Liping; Sun, Kun

    2012-10-01

    In this paper, we present a virtual endoscopy (VE) system for the diagnosis of heart diseases, which has proved efficient, affordable, and easy to popularize for viewing the interior of the heart. Dual-source CT (DSCT) data were used as the primary data in our system. The 3D structure of the virtual heart was reconstructed with 3D texture mapping technology based on the graphics processing unit (GPU) and could be displayed dynamically in real time. During real-time display, we could not only observe the inside of the heart chambers but also examine them from new viewing angles using 3D data already clipped according to the physician's needs. For observation, we used both an interactive mode and an automatic mode. In the automatic mode, we used Dijkstra's algorithm, with the 3D Euclidean distance as the weighting factor, to find the view path quickly, and then used the view path to calculate the four-chamber plane. PMID:23198444
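
    The view-path search described above can be sketched as Dijkstra's algorithm over a voxel-adjacency graph with the 3D Euclidean distance as the edge weight; the tiny graph below is hypothetical, standing in for voxel centres inside the clipped DSCT volume.

        # Dijkstra's algorithm with 3D Euclidean edge weights, as a sketch of the
        # view-path search. Node coordinates and adjacency are invented examples.
        import heapq
        import math

        points = {                      # node -> (x, y, z) in mm (hypothetical)
            "A": (0.0, 0.0, 0.0),
            "B": (5.0, 1.0, 0.0),
            "C": (9.0, 4.0, 1.0),
            "D": (4.0, 6.0, 2.0),
        }
        adjacency = {"A": ["B", "D"], "B": ["A", "C", "D"], "C": ["B"], "D": ["A", "B"]}

        def dist(u, v):
            return math.dist(points[u], points[v])   # 3D Euclidean distance

        def dijkstra(start, goal):
            best = {start: 0.0}
            prev = {}
            heap = [(0.0, start)]
            while heap:
                d, node = heapq.heappop(heap)
                if node == goal:
                    break
                if d > best.get(node, math.inf):
                    continue
                for nbr in adjacency[node]:
                    nd = d + dist(node, nbr)
                    if nd < best.get(nbr, math.inf):
                        best[nbr], prev[nbr] = nd, node
                        heapq.heappush(heap, (nd, nbr))
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1], best[goal]

        print(dijkstra("A", "C"))   # shortest view path from A to C and its length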

  11. 3D Virtual Reality for Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Speck, Angela; Ruzhitskaya, L.; Laffey, J.; Ding, N.

    2012-01-01

    We are developing 3D virtual learning environments (VLEs) as learning materials for an undergraduate astronomy course, which will utilize advances both in available technologies and in our understanding of the social nature of learning. These learning materials will be used to test whether such VLEs can indeed augment science learning so that it is more engaging, active, visual and effective. Our project focuses on the challenges and requirements of introductory college astronomy classes. Here we present our virtual world of the Jupiter system and how we plan to implement it to allow students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The VLE allows students to work individually or collaboratively. The 3D world also provides an opportunity for astronomy education research to investigate the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students' motivation and learning outcomes. Use of this VLE is also a valuable means of exploring how learners' spatial awareness can be enhanced by working in a 3D environment. We will present the Jupiter-system environment along with a preliminary study of the efficacy and usability of our Jupiter 3D VLE.

  12. 3D Cell Culture in Alginate Hydrogels

    PubMed Central

    Andersen, Therese; Auk-Emblem, Pia; Dornish, Michael

    2015-01-01

    This review compiles information regarding the use of alginate, and in particular alginate hydrogels, in culturing cells in 3D. Knowledge of alginate chemical structure and functionality are shown to be important parameters in design of alginate-based matrices for cell culture. Gel elasticity as well as hydrogel stability can be impacted by the type of alginate used, its concentration, the choice of gelation technique (ionic or covalent), and divalent cation chosen as the gel inducing ion. The use of peptide-coupled alginate can control cell–matrix interactions. Gelation of alginate with concomitant immobilization of cells can take various forms. Droplets or beads have been utilized since the 1980s for immobilizing cells. Newer matrices such as macroporous scaffolds are now entering the 3D cell culture product market. Finally, delayed gelling, injectable, alginate systems show utility in the translation of in vitro cell culture to in vivo tissue engineering applications. Alginate has a history and a future in 3D cell culture. Historically, cells were encapsulated in alginate droplets cross-linked with calcium for the development of artificial organs. Now, several commercial products based on alginate are being used as 3D cell culture systems that also demonstrate the possibility of replacing or regenerating tissue.

  13. Collaborative annotation of 3D crystallographic models.

    PubMed

    Hunter, J; Henderson, M; Khan, I

    2007-01-01

    This paper describes the AnnoCryst system, a tool designed to enable authenticated collaborators to share online discussions about 3D crystallographic structures through the asynchronous attachment, storage, and retrieval of annotations. Annotations are personal comments, interpretations, questions, assessments, or references that can be attached to files, data, digital objects, or Web pages. The AnnoCryst system enables annotations to be attached to 3D crystallographic models retrieved from either private local repositories (e.g., Fedora) or public online databases (e.g., Protein Data Bank or Inorganic Crystal Structure Database) via a Web browser. The system uses the Jmol plugin for viewing and manipulating the 3D crystal structures but extends Jmol by providing an additional interface through which annotations can be created, attached, stored, searched, browsed, and retrieved. The annotations are stored on a standardized Web annotation server (Annotea), which has been extended to support 3D macromolecular structures. Finally, the system is embedded within a security framework that is capable of authenticating users and restricting access only to trusted colleagues.

  14. A Rotation Invariant in 3-D Reaching

    ERIC Educational Resources Information Center

    Mitra, Suvobrata; Turvey, M. T.

    2004-01-01

    In 3 experiments, the authors investigated changes in hand orientation during a 3-D reaching task that imposed specific position and orientation requirements on the hand's initial and final postures. Instantaneous hand orientation was described using 3-element rotation vectors representing current orientation as a rotation from a fixed reference…

  15. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  16. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  17. NASA Sees Typhoon Rammasun in 3-D

    NASA Video Gallery

    NASA's TRMM satellite flew over on July 14, 2014 at 1819 UTC and data was used to make this 3-D flyby showing thunderstorms to heights of almost 17km (10.5 miles). Rain was measured falling at a ra...

  18. 3-D Teaching Models for All

    ERIC Educational Resources Information Center

    Bradley, Joan; Farland-Smith, Donna

    2010-01-01

    Allowing a student to "see" through touch what other students see through a microscope can be a challenging task. Therefore, author Joan Bradley created three-dimensional (3-D) models with one student's visual impairment in mind. They are meant to benefit all students and can be used to teach common high school biology topics, including the…

  19. Evolution of Archaea in 3D modeling

    NASA Astrophysics Data System (ADS)

    Pikuta, Elena V.; Tankosic, Dragana; Sheldon, Rob

    2012-11-01

    The analysis of all groups of Archaea performed in two dimensions has demonstrated a specific distribution of archaeal species as a function of pH/temperature, temperature/salinity and pH/salinity. The work presented here extends this analysis with three-dimensional (3D) modeling on a logarithmic scale. As shown in the 2D representation, the "Rules of the Diagonal" are expressed even more clearly in 3D modeling. In this article, we used 3D mesh modeling to show the range of distribution of each separate group of Archaea as a function of pH, temperature, and salinity. Visible overlap and links between different groups indicate a direction of evolution in Archaea. The major direction in ancestral life (the vector of evolution) is from high-temperature, acidic, low-salinity systems towards low-temperature, alkaline, high-salinity systems. The geometrical coordinates and the distribution of the separate groups of Archaea in 3D space were analyzed with a mathematical description of the functions. Based on the obtained data, a new model for the origin and evolution of life on Earth is proposed. The geometry of this model is described by a hyperboloid of one sheet. The conclusions of this research are consistent with previous results derived from the two-dimensional diagrams. This approach is suggested as a new method for analyzing any biological group in relation to its environmental parameters.
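
    For reference, a hyperboloid of one sheet has the standard quadric form below; the axes are only placeholders for the (logarithmic) pH, temperature, and salinity coordinates, and the abstract does not report the fitted constants a, b, c.

        \frac{x^{2}}{a^{2}} + \frac{y^{2}}{b^{2}} - \frac{z^{2}}{c^{2}} = 1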

  20. Introduction to 3D Graphics through Excel

    ERIC Educational Resources Information Center

    Benacka, Jan

    2013-01-01

    The article presents a method of explaining the principles of 3D graphics through making a revolvable and sizable orthographic parallel projection of cuboid in Excel. No programming is used. The method was tried in fourteen 90 minute lessons with 181 participants, which were Informatics teachers, undergraduates of Applied Informatics and gymnasium…

  1. 3D Cell Culture in Alginate Hydrogels.

    PubMed

    Andersen, Therese; Auk-Emblem, Pia; Dornish, Michael

    2015-03-24

    This review compiles information regarding the use of alginate, and in particular alginate hydrogels, in culturing cells in 3D. Knowledge of alginate chemical structure and functionality are shown to be important parameters in design of alginate-based matrices for cell culture. Gel elasticity as well as hydrogel stability can be impacted by the type of alginate used, its concentration, the choice of gelation technique (ionic or covalent), and divalent cation chosen as the gel inducing ion. The use of peptide-coupled alginate can control cell-matrix interactions. Gelation of alginate with concomitant immobilization of cells can take various forms. Droplets or beads have been utilized since the 1980s for immobilizing cells. Newer matrices such as macroporous scaffolds are now entering the 3D cell culture product market. Finally, delayed gelling, injectable, alginate systems show utility in the translation of in vitro cell culture to in vivo tissue engineering applications. Alginate has a history and a future in 3D cell culture. Historically, cells were encapsulated in alginate droplets cross-linked with calcium for the development of artificial organs. Now, several commercial products based on alginate are being used as 3D cell culture systems that also demonstrate the possibility of replacing or regenerating tissue.

  2. PlumeSat: A Micro-Satellite Based Plume Imagery Collection Experiment

    SciTech Connect

    Ledebuhr, A.G.; Ng, L.C.

    2002-06-30

    This paper describes a technical approach to cost-effectively collect plume imagery of boosting targets using a novel micro-satellite based platform operating in low earth orbit (LEO). The plume collection micro-satellite, or PlumeSat for short, will be capable of carrying an array of multi-spectral (UV through LWIR) passive and active (imaging LADAR) sensors and maneuvering with a lateral divert propulsion system to different observation altitudes (100 to 300 km) and different closing geometries to achieve a range of aspect angles (15 to 60 degrees) in order to simulate a variety of boost phase intercept missions. The PlumeSat will be a cost-effective platform to collect boost phase plume imagery from within 1 to 10 km ranges, resulting in 0.1 to 1 meter resolution imagery of a variety of potential target missiles, with the goal of demonstrating reliable plume-to-hardbody handover algorithms for future boost phase intercept missions. Once deployed on orbit, the PlumeSat would perform a series of phenomenology collection experiments until it expends its on-board propellants. The baseline PlumeSat concept is sized to provide 5 to 7 separate fly-by data collects of boosting targets. The total number of data collects will depend on the orbital basing altitude and the accuracy in delivering the boosting target vehicle to the nominal PlumeSat fly-by volume.
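
    The quoted figures are mutually consistent with a fixed angular resolution of roughly 100 microradians; the one-line check below infers that value from the abstract's numbers rather than from any stated instrument specification.

        # Quick check that 1-10 km ranges and 0.1-1 m resolution imply an angular
        # resolution (IFOV) of about 100 microradians. The IFOV is inferred, not
        # a published instrument specification.
        for range_km in (1, 5, 10):
            ifov_rad = 100e-6                      # assumed instantaneous field of view
            gsd_m = range_km * 1e3 * ifov_rad      # target sample distance at that range
            print(f"range {range_km:2d} km -> resolution {gsd_m:.1f} m")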

  3. 3D Printed Programmable Release Capsules.

    PubMed

    Gupta, Maneesh K; Meng, Fanben; Johnson, Blake N; Kong, Yong Lin; Tian, Limei; Yeh, Yao-Wen; Masters, Nina; Singamaneni, Srikanth; McAlpine, Michael C

    2015-08-12

    The development of methods for achieving precise spatiotemporal control over chemical and biomolecular gradients could enable significant advances in areas such as synthetic tissue engineering, biotic-abiotic interfaces, and bionanotechnology. Living organisms guide tissue development through highly orchestrated gradients of biomolecules that direct cell growth, migration, and differentiation. While numerous methods have been developed to manipulate and implement biomolecular gradients, integrating gradients into multiplexed, three-dimensional (3D) matrices remains a critical challenge. Here we present a method to 3D print stimuli-responsive core/shell capsules for programmable release of multiplexed gradients within hydrogel matrices. These capsules are composed of an aqueous core, which can be formulated to maintain the activity of payload biomolecules, and a poly(lactic-co-glycolic) acid (PLGA, an FDA approved polymer) shell. Importantly, the shell can be loaded with plasmonic gold nanorods (AuNRs), which permits selective rupturing of the capsule when irradiated with a laser wavelength specifically determined by the lengths of the nanorods. This precise control over space, time, and selectivity allows the patterning of 2D and 3D multiplexed arrays of enzyme-loaded capsules, along with tunable laser-triggered rupture and release of active enzymes into a hydrogel ambient. The advantages of this 3D printing-based method include (1) highly monodisperse capsules, (2) efficient encapsulation of biomolecular payloads, (3) precise spatial patterning of capsule arrays, (4) "on the fly" programmable reconfiguration of gradients, and (5) versatility for incorporation in hierarchical architectures. Indeed, 3D printing of programmable release capsules may represent a powerful new tool to enable spatiotemporal control over biomolecular gradients. PMID:26042472

  4. Parallel CARLOS-3D code development

    SciTech Connect

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  5. 3-D Force-balanced Magnetospheric Configurations

    SciTech Connect

    Sorin Zaharia; C.Z. Cheng; K. Maezawa

    2003-02-10

    The knowledge of plasma pressure is essential for many physics applications in the magnetosphere, such as computing magnetospheric currents and deriving magnetosphere-ionosphere coupling. A thorough knowledge of the 3-D pressure distribution has however eluded the community, as most in-situ pressure observations are either in the ionosphere or the equatorial region of the magnetosphere. With the assumption of pressure isotropy there have been attempts to obtain the pressure at different locations by either (a) mapping observed data (e.g., in the ionosphere) along the field lines of an empirical magnetospheric field model or (b) computing a pressure profile in the equatorial plane (in 2-D) or along the Sun-Earth axis (in 1-D) that is in force balance with the magnetic stresses of an empirical model. However, the pressure distributions obtained through these methods are not in force balance with the empirical magnetic field at all locations. In order to find a global 3-D plasma pressure distribution in force balance with the magnetospheric magnetic field, we have developed the MAG-3D code, that solves the 3-D force balance equation J x B = ∇P computationally. Our calculation is performed in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials as B = ∇ψ x ∇α. The pressure distribution, P = P(ψ, α), is prescribed in the equatorial plane and is based on satellite measurements. In addition, computational boundary conditions for the ψ surfaces are imposed using empirical field models. Our results provide 3-D distributions of magnetic field and plasma pressure as well as parallel and transverse currents for both quiet-time and disturbed magnetospheric conditions.
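
    Restating the force-balance relations above in LaTeX form (the closure through Ampère's law is a standard assumption added here for context, not a detail quoted from the abstract):

        \mathbf{J} \times \mathbf{B} = \nabla P, \qquad
        \mathbf{J} = \frac{1}{\mu_0}\,\nabla \times \mathbf{B}, \qquad
        \mathbf{B} = \nabla\psi \times \nabla\alpha, \qquad
        P = P(\psi, \alpha)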

  6. Laser printing of 3D metallic interconnects

    NASA Astrophysics Data System (ADS)

    Beniam, Iyoel; Mathews, Scott A.; Charipar, Nicholas A.; Auyeung, Raymond C. Y.; Piqué, Alberto

    2016-04-01

    The use of laser-induced forward transfer (LIFT) techniques for the printing of functional materials has been demonstrated for numerous applications. The printing gives rise to patterns, which can be used to fabricate planar interconnects. More recently, various groups have demonstrated electrical interconnects from laser-printed 3D structures. The laser printing of these interconnects takes place through aggregation of voxels of either molten metal or of pastes containing dispersed metallic particles. However, the generated 3D structures do not possess the same metallic conductivity as a bulk metal interconnect of the same cross-section and length as those formed by wire bonding or tab welding. An alternative is to laser transfer entire 3D structures using a technique known as lase-and-place. Lase-and-place is a LIFT process whereby whole components and parts can be transferred from a donor substrate onto a desired location with one single laser pulse. This paper will describe the use of LIFT to laser print freestanding, solid metal foils or beams precisely over the contact pads of discrete devices to interconnect them into fully functional circuits. Furthermore, this paper will also show how the same laser can be used to bend or fold the bulk metal foils prior to transfer, thus forming compliant 3D structures able to provide strain relief for the circuits under flexing or during motion from thermal mismatch. These interconnect "ridges" can span wide gaps (on the order of a millimeter) and accommodate height differences of tens of microns between adjacent devices. Examples of these laser printed 3D metallic bridges and their role in the development of next generation electronics by additive manufacturing will be presented.

  7. 3D microscopy for microfabrication quality control

    NASA Astrophysics Data System (ADS)

    Muller, Matthew S.; De Jean, Paul D.

    2015-03-01

    A novel stereo microscope adapter, the SweptVue, has been developed to rapidly perform quantitative 3D microscopy for cost-effective microfabrication quality control. The SweptVue adapter uses the left and right stereo channels of an Olympus SZX7 stereo microscope for sample illumination and detection, respectively. By adjusting the temporal synchronization between the illumination lines projected from a Texas Instruments DLP LightCrafter and the rolling shutter on a Point Grey Flea3 CMOS camera, micrometer-scale depth features can be easily and rapidly measured at up to 5 μm resolution on a variety of microfabricated samples. In this study, the build performance of an industrial-grade Stratasys Objet 300 Connex 3D printer was examined. Ten identical parts were 3D printed with a lateral and depth resolution of 42 μm and 30 μm, respectively, using both a rigid and a flexible Stratasys PolyJet material. Surface elevation precision and accuracy were examined over multiple regions of interest on plateau and hemispherical surfaces. In general, the dimensions of the examined features were reproducible across the parts built using both materials. However, significant systematic lateral and height build errors were discovered, such as decreased heights when approaching the edges of plateaus, inaccurate height steps, and poor tolerances on channel width. For 3D printed parts to be used in functional applications requiring micro-scale tolerances, they need to conform to specification. Despite appearing identical, our 3D printed parts were found to have a variety of defects that the SweptVue adapter quickly revealed.

  8. 3D Printed Programmable Release Capsules

    PubMed Central

    Gupta, Maneesh K.; Meng, Fanben; Johnson, Blake N.; Kong, Yong Lin; Tian, Limei; Yeh, Yao-Wen; Masters, Nina; Singamaneni, Srikanth; McAlpine, Michael C.

    2015-01-01

    The development of methods for achieving precise spatiotemporal control over chemical and biomolecular gradients could enable significant advances in areas such as synthetic tissue engineering, biotic–abiotic interfaces, and bionanotechnology. Living organisms guide tissue development through highly orchestrated gradients of biomolecules that direct cell growth, migration, and differentiation. While numerous methods have been developed to manipulate and implement biomolecular gradients, integrating gradients into multiplexed, three-dimensional (3D) matrices remains a critical challenge. Here we present a method to 3D print stimuli-responsive core/shell capsules for programmable release of multiplexed gradients within hydrogel matrices. These capsules are composed of an aqueous core, which can be formulated to maintain the activity of payload biomolecules, and a poly(lactic-co-glycolic) acid (PLGA, an FDA approved polymer) shell. Importantly, the shell can be loaded with plasmonic gold nanorods (AuNRs), which permits selective rupturing of the capsule when irradiated with a laser wavelength specifically determined by the lengths of the nanorods. This precise control over space, time, and selectivity allows for the ability to pattern 2D and 3D multiplexed arrays of enzyme-loaded capsules along with tunable laser-triggered rupture and release of active enzymes into a hydrogel ambient. The advantages of this 3D printing-based method include (1) highly monodisperse capsules, (2) efficient encapsulation of biomolecular payloads, (3) precise spatial patterning of capsule arrays, (4) “on the fly” programmable reconfiguration of gradients, and (5) versatility for incorporation in hierarchical architectures. Indeed, 3D printing of programmable release capsules may represent a powerful new tool to enable spatiotemporal control over biomolecular gradients. PMID:26042472

  9. 3D Printed Programmable Release Capsules.

    PubMed

    Gupta, Maneesh K; Meng, Fanben; Johnson, Blake N; Kong, Yong Lin; Tian, Limei; Yeh, Yao-Wen; Masters, Nina; Singamaneni, Srikanth; McAlpine, Michael C

    2015-08-12

    The development of methods for achieving precise spatiotemporal control over chemical and biomolecular gradients could enable significant advances in areas such as synthetic tissue engineering, biotic-abiotic interfaces, and bionanotechnology. Living organisms guide tissue development through highly orchestrated gradients of biomolecules that direct cell growth, migration, and differentiation. While numerous methods have been developed to manipulate and implement biomolecular gradients, integrating gradients into multiplexed, three-dimensional (3D) matrices remains a critical challenge. Here we present a method to 3D print stimuli-responsive core/shell capsules for programmable release of multiplexed gradients within hydrogel matrices. These capsules are composed of an aqueous core, which can be formulated to maintain the activity of payload biomolecules, and a poly(lactic-co-glycolic) acid (PLGA, an FDA approved polymer) shell. Importantly, the shell can be loaded with plasmonic gold nanorods (AuNRs), which permits selective rupturing of the capsule when irradiated with a laser wavelength specifically determined by the lengths of the nanorods. This precise control over space, time, and selectivity allows for the ability to pattern 2D and 3D multiplexed arrays of enzyme-loaded capsules along with tunable laser-triggered rupture and release of active enzymes into a hydrogel ambient. The advantages of this 3D printing-based method include (1) highly monodisperse capsules, (2) efficient encapsulation of biomolecular payloads, (3) precise spatial patterning of capsule arrays, (4) "on the fly" programmable reconfiguration of gradients, and (5) versatility for incorporation in hierarchical architectures. Indeed, 3D printing of programmable release capsules may represent a powerful new tool to enable spatiotemporal control over biomolecular gradients.

  10. 3D Printing: 3D Printing of Highly Stretchable and Tough Hydrogels into Complex, Cellularized Structures.

    PubMed

    Hong, Sungmin; Sycks, Dalton; Chan, Hon Fai; Lin, Shaoting; Lopez, Gabriel P; Guilak, Farshid; Leong, Kam W; Zhao, Xuanhe

    2015-07-15

    X. Zhao and co-workers develop on page 4035 a new biocompatible hydrogel system that is extremely tough and stretchable and can be 3D printed into complex structures, such as the multilayer mesh shown. Cells encapsulated in the tough and printable hydrogel maintain high viability. 3D-printed structures of the tough hydrogel can sustain high mechanical loads and deformations.

  11. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
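
    The abstract states that the service accepts nucleotide ranges, returns JSON for programmatic use, and is reachable at the URL above. A minimal sketch of such a query is shown below; the endpoint path and parameter names are illustrative assumptions, not documented API details.

        # Hypothetical sketch of a programmatic R3D-2-MSA query. Only the base URL,
        # the range-based query model, and JSON output are taken from the abstract;
        # the parameter names and values below are illustrative assumptions.
        import requests

        BASE_URL = "http://rna.bgsu.edu/r3d-2-msa"   # stated in the abstract
        params = {
            "structure": "4V9F",    # hypothetical PDB ID of a representative 3D structure
            "ranges": "120:135",    # hypothetical nucleotide-number range (up to 5 ranges allowed)
            "format": "json",       # JSON output for subsequent programmatic use
        }

        response = requests.get(BASE_URL, params=params, timeout=30)
        response.raise_for_status()
        columns = response.json()   # MSA columns / sequence variants for the requested range
        print(type(columns))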

  12. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  13. The dimension added by 3D scanning and 3D printing of meteorites

    NASA Astrophysics Data System (ADS)

    de Vet, S. J.

    2016-01-01

    An overview of the 3D photodocumentation of meteorites is presented, focussing on two 3D scanning methods in relation to 3D printing. The 3D photodocumentation of meteorites provides new ways for the digital preservation of culturally, historically or scientifically unique meteorites. It has the potential to become a new documentation standard for meteorites that can exist complementary to traditional photographic documentation. Notable applications include (i.) use of physical properties in dark flight-, strewn field-, or aerodynamic modelling; (ii.) collection research of meteorites curated by different museum collections, and (iii.) public dissemination of meteorite models as a resource for educational users. The possible applications provided by the additional dimension of 3D illustrate the benefits for the meteoritics community.

  14. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    PubMed

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001)) enables simultaneous acquisition of spectral information and 3D spatial information for an incoherently illuminated or self-luminous object. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  15. SB3D User Manual, Santa Barbara 3D Radiative Transfer Model

    SciTech Connect

    O'Hirok, William

    1999-01-01

    SB3D is a three-dimensional atmospheric and oceanic radiative transfer model for the Solar spectrum. The microphysics employed in the model are the same as used in the model SBDART. It is assumed that the user of SB3D is familiar with SBDART and IDL. SB3D differs from SBDART in that computations are conducted on media in three dimensions rather than a single column (i.e. plane-parallel), and a stochastic method (Monte Carlo) is employed instead of a numerical approach (Discrete Ordinates) for estimating a solution to the radiative transfer equation. Because of these two differences between SB3D and SBDART, the input and running of SB3D are more unwieldy and require compromises between model performance and computational expense. Hence, there is no one correct method for running the model and the user must develop a sense of the proper input and configuration of the model.
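
    As a generic illustration of the stochastic (Monte Carlo) approach mentioned above, a photon's free path between interactions is typically sampled from the local extinction coefficient. This is a textbook sampling step under stated assumptions, not SB3D's actual implementation:

        # Generic Monte Carlo radiative-transfer step (illustrative; not SB3D code).
        import math, random

        def free_path(sigma_ext):
            # Path length between interactions is exponentially distributed;
            # using 1 - U avoids taking log(0).
            return -math.log(1.0 - random.random()) / sigma_ext

        def interaction(single_scattering_albedo):
            # Decide between scattering and absorption at the interaction point.
            return "scatter" if random.random() < single_scattering_albedo else "absorb"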

  16. Quasi 3D dosimetry (EPID, conventional 2D/3D detector matrices)

    NASA Astrophysics Data System (ADS)

    Bäck, A.

    2015-01-01

    Patient specific pretreatment measurement for IMRT and VMAT QA should preferably give information with a high resolution in 3D. The ability to distinguish complex treatment plans, i.e. treatment plans with a difference between measured and calculated dose distributions that exceeds a specified tolerance, puts high demands on the dosimetry system used for the pretreatment measurements, and the results of the measurement evaluation need a clinical interpretation. There are a number of commercial dosimetry systems designed for pretreatment IMRT QA measurements. 2D arrays such as MapCHECK® (Sun Nuclear), MatriXXEvolution (IBA Dosimetry) and OCTAVIUS® 1500 (PTW), 3D phantoms such as OCTAVIUS® 4D (PTW), ArcCHECK® (Sun Nuclear) and Delta4 (ScandiDos) and software for EPID dosimetry and 3D reconstruction of the dose in the patient geometry such as EPIDoseTM (Sun Nuclear) and Dosimetry CheckTM (Math Resolutions) are available. None of those dosimetry systems can measure the 3D dose distribution with a high resolution (full 3D dose distribution). Those systems can be called quasi 3D dosimetry systems. To be able to estimate the delivered dose in full 3D the user is dependent on a calculation algorithm in the software of the dosimetry system. All the vendors of the dosimetry systems mentioned above provide calculation algorithms to reconstruct a full 3D dose in the patient geometry. This enables analyses of the differences between measured and calculated dose distributions in DVHs of the structures of clinical interest, which facilitates the clinical interpretation and is a promising tool to be used for pretreatment IMRT QA measurements. However, independent validation studies on the accuracy of those algorithms are scarce. Pretreatment IMRT QA using the quasi 3D dosimetry systems mentioned above relies on both measurement uncertainty and the accuracy of calculation algorithms. In this article, these quasi 3D dosimetry systems and their use in patient specific pretreatment IMRT

  17. Interpretation of 2d and 3d Building Details on Facades and Roofs

    NASA Astrophysics Data System (ADS)

    Meixner, P.; Leberl, F.; Brédif, M.

    2011-04-01

    Current Internet-inspired mapping data are in the form of street maps, orthophotos, 3D models or street-side images and serve to support mostly search and navigation. Yet the only mapping data that currently can really be searched are the street maps via their addresses and coordinates. The orthophotos, 3D models and street-side images represent predominantly "eye candy" with little added value to the Internet-user. We are interested in characterizing the elements of the urban space from imagery. In this paper we discuss the use of street side imagery and aerial imagery to develop descriptions of urban spaces, initially of building facades and roofs. We present methods (a) to segment facades using high-overlap street side facade images, (b) to map facades and facade details from vertical aerial images, and (c) to characterize roofs by their type and details, also from aerial photography. This paper describes a method of roof segmentation with the goal of assigning each roof to a specific architectural style. Questions of the use of the attic space, or the placement of solar panels, are of interest. It is of interest that roofs have recently been mapped using LiDAR point clouds. We demonstrate that aerial images are a useful and economical alternative to LiDAR for the characterization of building roofs, and that they also contain very valuable information about facades.

  18. INCORPORATING DYNAMIC 3D SIMULATION INTO PRA

    SciTech Connect

    Steven R Prescott; Curtis Smith

    2011-07-01

    Through continued advancement in computational resources, development that was previously done by trial-and-error production is now performed through computer simulation. These virtual physical representations have the potential to provide accurate and valid modeling results and are being used in many different technical fields. Risk assessment now has the opportunity to use 3D simulation to improve analysis results and insights, especially for external event analysis. By using simulations, the modeler only has to determine the likelihood of an event without having to also predict the results of that event. The 3D simulation automatically determines not only the outcome of the event, but when those failures occur. How can we effectively incorporate 3D simulation into traditional PRA? Most PRA plant modeling is made up of components with different failure modes, probabilities, and rates. Typically, these components are grouped into various systems and then are modeled together (in different combinations) as a “system” with logic structures to form fault trees. Applicable fault trees are combined through scenarios, typically represented by event tree models. Though this method gives us failure results for a given model, it has limitations when it comes to time-based dependencies or dependencies that are coupled to physical processes which may themselves be space- or time-dependent. Since failures from a 3D simulation are naturally time-related, they should be used in that manner. In our simulation approach, traditional static models are converted into an equivalent state diagram representation with start states, probabilistically driven movements between states, and terminal states. As the state model is run repeatedly, it converges to the same results as the PRA model in cases where time-related factors are not important. In cases where timing considerations are important (e.g., when events are dependent upon each other), then the simulation approach will typically
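
    A minimal sketch of the state-diagram idea described above: a component moves probabilistically toward a terminal failed state over simulated time, and repeated runs converge toward the static failure probability when timing does not matter. The failure rate, mission time, and run count below are illustrative values, not taken from the report.

        # Minimal sketch of a probabilistically driven state-model simulation
        # (illustrative rates and times; not the report's actual plant model).
        import math, random

        FAILURE_RATE = 1e-3     # hypothetical failures per hour
        MISSION_HOURS = 1000.0
        RUNS = 100_000

        def run_once():
            # Sample the time at which the component enters the terminal "failed" state.
            time_to_failure = random.expovariate(FAILURE_RATE)
            return time_to_failure <= MISSION_HOURS

        simulated = sum(run_once() for _ in range(RUNS)) / RUNS
        static = 1.0 - math.exp(-FAILURE_RATE * MISSION_HOURS)   # time-free result
        print(f"simulated: {simulated:.4f}  static: {static:.4f}")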

  19. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

    Soft materials and structured polymers are extremely useful nanotechnology building blocks. Block copolymers, in particular, have served as 2D masks for nanolithography and 3D scaffolds for photonic crystals, nanoparticle fabrication, and solar cells. For many of these applications, the precise 3-dimensional structure and the number and type of defects in the polymer is important for ultimate function. However, directly visualizing the 3D structure of a soft material from the nanometer to millimeter length scales is a significant technical challenge. Here, we propose to develop the instrumentation needed for direct 3D structure determination at near nanometer resolution throughout a nearly millimeter-cubed volume of a soft, potentially heterogeneous, material. This new capability will be a valuable research tool for LANL missions in chemistry, materials science, and nanoscience. Our approach to soft materials visualization builds upon exciting developments in super-resolution optical microscopy that have occurred over the past two years. To date, these new, truly revolutionary, imaging methods have been developed and almost exclusively used for biological applications. However, in addition to biological cells, these super-resolution imaging techniques hold extreme promise for direct visualization of many important nanostructured polymers and other heterogeneous chemical systems. Los Alamos has a unique opportunity to lead the development of these super-resolution imaging methods for problems of chemical rather than biological significance. While these optical methods are limited to systems transparent to visible wavelengths, we stress that many important functional chemicals such as polymers, glasses, sol-gels, aerogels, or colloidal assemblies meet this requirement, with specific examples including materials designed for optical communication, manipulation, or light-harvesting. Our Research Goals are: (1) Develop the instrumentation necessary for imaging materials

  20. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  1. Increasing the range accuracy of three-dimensional ghost imaging ladar using optimum slicing number method

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Xu, Lu; Yang, Cheng-Hua; Wang, Qiang; Liu, Yue-Hao; Zhao, Yuan

    2015-12-01

    The range accuracy of three-dimensional (3D) ghost imaging is derived. Based on the derived range accuracy equation, the relationship between the slicing number and the range accuracy is analyzed and an optimum slicing number (OSN) is determined. According to the OSN, an improved 3D ghost imaging algorithm is proposed to increase the range accuracy. Experimental results indicate that the slicing number can affect the range accuracy significantly and the highest range accuracy can be achieved if the 3D ghost imaging system works with OSN. Project supported by the Young Scientist Fund of the National Natural Science Foundation of China (Grant No. 61108072).

  2. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
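
    The description above notes that the flow solver writes a solution file holding density, the three momentum components, and stagnation energy at every grid point. The sketch below reads a single-grid, whitespace-separated (formatted) PLOT3D-style solution file under that assumption; actual files may be multi-grid or unformatted binary, so treat the layout here as illustrative.

        # Illustrative reader for a single-grid, formatted PLOT3D-style solution (Q) file.
        # The five variables per point (density, x-, y-, z-momentum, stagnation energy)
        # come from the description above; the single-grid ASCII layout is an assumption.
        import numpy as np

        def read_q_ascii(path):
            with open(path) as f:
                vals = np.array(f.read().split(), dtype=float)
            ni, nj, nk = (int(v) for v in vals[:3])     # grid dimensions
            refs = vals[3:7]                            # reference quantities (e.g. Mach, alpha, Re, time)
            q = vals[7:7 + 5 * ni * nj * nk]
            q = q.reshape(5, nk, nj, ni)                # 5 variables over the grid
            return (ni, nj, nk), refs, q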

  3. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  4. 3D Surface Generation from Aerial Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Khodaei, B.; Samadzadegan, F.; Dadras Javan, F.; Hasani, H.

    2015-12-01

    Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video, then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated using thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, equipped with a 25 mm lens, and is mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show that the accuracy of the 3D model generated from thermal images is comparable to that of the DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) is smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
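
    The four-step pipeline above (frame extraction, SIFT tie points, bundle adjustment, dense matching) can be prototyped with standard libraries. The sketch below covers only the first two steps using OpenCV; bundle adjustment and dense matching are normally delegated to a photogrammetry package and are omitted. The video path and sampling stride are illustrative assumptions.

        # Sketch of frame extraction and SIFT tie-point generation (OpenCV).
        import cv2

        cap = cv2.VideoCapture("thermal_survey.mp4")    # hypothetical input video
        frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % 10 == 0:                           # keep every 10th frame
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            idx += 1
        cap.release()

        sift = cv2.SIFT_create()
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        kp_prev, des_prev = sift.detectAndCompute(frames[0], None)
        for frame in frames[1:]:
            kp, des = sift.detectAndCompute(frame, None)
            matches = matcher.knnMatch(des_prev, des, k=2)
            tie_points = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
            kp_prev, des_prev = kp, des
        # Tie points would next feed bundle adjustment and dense image matching (not shown).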

  5. Automatic Building Extraction and Roof Reconstruction in 3k Imagery Based on Line Segments

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Tian, J.; Kurz, F.

    2016-06-01

    We propose an image processing workflow to extract rectangular building footprints using georeferenced stereo-imagery and a derivative digital surface model (DSM) product. The approach applies a line segment detection procedure to the imagery and subsequently verifies identified line segments individually to create a footprint on the basis of the DSM. The footprint is further optimized by morphological filtering. Towards the realization of 3D models, we decompose the produced footprint and generate a 3D point cloud from DSM height information. By utilizing the robust RANSAC plane fitting algorithm, the roof structure can be correctly reconstructed. In an experimental part, the proposed approach has been performed on 3K aerial imagery.
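
    As a concrete illustration of the RANSAC plane-fitting step mentioned above, the sketch below fits a single dominant plane to a roof point cloud with NumPy. The threshold and iteration count are illustrative; a full roof reconstruction would fit several planes and intersect them.

        # Illustrative single-plane RANSAC fit to an (N, 3) roof point cloud.
        import numpy as np

        def ransac_plane(points, n_iter=500, thresh=0.1, seed=0):
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(points), dtype=bool)
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(normal)
                if norm < 1e-9:                 # degenerate (collinear) sample
                    continue
                normal /= norm
                d = -normal @ sample[0]
                dist = np.abs(points @ normal + d)
                inliers = dist < thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return best_inliers                 # mask of points on the dominant roof plane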

  6. Crashworthiness simulations with DYNA3D

    SciTech Connect

    Schauer, D.A.; Hoover, C.G.; Kay, G.J.; Lee, A.S.; De Groot, A.J.

    1996-04-01

    Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit, finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.

  7. Automating Shallow 3D Seismic Imaging

    SciTech Connect

    Steeples, Don; Tsoflias, George

    2009-01-15

    Our efforts since 1997 have been directed toward developing ultra-shallow seismic imaging as a cost-effective method applicable to DOE facilities. This report covers the final year of grant-funded research to refine 3D shallow seismic imaging, which built on a previous 7-year grant (FG07-97ER14826) that refined and demonstrated the use of an automated method of conducting shallow seismic surveys; this represents a significant departure from conventional seismic-survey field procedures. The primary objective of this final project was to develop an automated three-dimensional (3D) shallow-seismic reflection imaging capability. This is a natural progression from our previous published work and is conceptually parallel to the innovative imaging methods used in the petroleum industry.

  8. Volumetric visualization of 3D data

    NASA Technical Reports Server (NTRS)

    Russell, Gregory; Miles, Richard

    1989-01-01

    In recent years, there has been a rapid growth in the ability to obtain detailed data on large complex structures in three dimensions. This development occurred first in the medical field, with CAT (computer aided tomography) scans and now magnetic resonance imaging, and in seismological exploration. With the advances in supercomputing and computational fluid dynamics, and in experimental techniques in fluid dynamics, there is now the ability to produce similar large data fields representing 3D structures and phenomena in these disciplines. These developments have produced a situation in which currently there is access to data which is too complex to be understood using the tools available for data reduction and presentation. Researchers in these areas are becoming limited by their ability to visualize and comprehend the 3D systems they are measuring and simulating.

  9. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando, Florida. Although the system is semi-automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing that are the core of our system provides the flexibility to fabricate figurines whose complexity is only limited by the creativity of the designer.

  10. 3D technology for intelligent trackers

    NASA Astrophysics Data System (ADS)

    Lipton, Ronald

    2010-10-01

    At Super-LHC luminosity it is expected that the standard suite of level 1 triggers for CMS will saturate. Information from the tracker will be needed to reduce trigger rates to satisfy the level 1 bandwidth. Tracking trigger modules which correlate information from closely-spaced sensor layers to form an on-detector momentum filter are being developed by several groups. We report on a trigger module design which utilizes three dimensional integrated circuit technology incorporating chips which are connected both to the top and bottom sensor, providing the ability to filter information locally. A demonstration chip, the VICTR, has been submitted to the Chartered/Tezzaron two-tier 3D run coordinated by Fermilab. We report on the 3D design concept, the status of the VICTR chip and associated sensor integration utilizing oxide bonding.

  11. Techniques for interactive 3-D scientific visualization

    SciTech Connect

    Glinert, E.P.; Blattner, M.M.; Becker, B.G.

    1990-09-24

    Interest in interactive 3-D graphics has exploded of late, fueled by (a) the allure of using scientific visualization to go "where no-one has gone before" and (b) by the development of new input devices which overcome some of the limitations imposed in the past by technology, yet which may be ill-suited to the kinds of interaction required by researchers active in scientific visualization. To resolve this tension, we propose a "flat 5-D" environment in which 2-D graphics are augmented by exploiting multiple human sensory modalities using cheap, conventional hardware readily available with personal computers and workstations. We discuss how interactions basic to 3-D scientific visualization, like searching a solution space and comparing two such spaces, are effectively carried out in our environment. Finally, we describe 3DMOVE, an experimental microworld we have implemented to test out some of our ideas. 40 refs., 4 figs.

  12. 3D Technology for intelligent trackers

    SciTech Connect

    Lipton, Ronald; /Fermilab

    2010-09-01

    At Super-LHC luminosity it is expected that the standard suite of level 1 triggers for CMS will saturate. Information from the tracker will be needed to reduce trigger rates to satisfy the level 1 bandwidth. Tracking trigger modules which correlate information from closely-spaced sensor layers to form an on-detector momentum filter are being developed by several groups. We report on a trigger module design which utilizes three dimensional integrated circuit technology incorporating chips which are connected both to the top and bottom sensor, providing the ability to filter information locally. A demonstration chip, the VICTR, has been submitted to the Chartered/Tezzaron two-tier 3D run coordinated by Fermilab. We report on the 3D design concept, the status of the VICTR chip and associated sensor integration utilizing oxide bonding.

  13. Multibaseline IFSAR for 3D target reconstruction

    NASA Astrophysics Data System (ADS)

    Ertin, Emre; Moses, Randolph L.; Potter, Lee C.

    2008-04-01

    We consider three-dimensional target reconstruction from SAR data collected on multiple complete circular apertures at different elevation angles. The 3-D resolution of circular SAR systems is constrained by two factors: the sparse sampling in elevation and the limited azimuthal persistence of the reflectors in the scene. Three-dimensional target reconstruction with multipass circular SAR data is further complicated by nonuniform elevation spacing in real flight paths and non-constant elevation angle throughout the circular pass. In this paper we first develop parametric spectral estimation methods that extend the standard IFSAR method of height estimation to apertures at more than two elevation angles. Next, we show that linear interpolation of the phase history data leads to unsatisfactory performance in 3-D reconstruction from nonuniformly sampled elevation passes. We then present a new sparsity-regularized interpolation algorithm to preprocess nonuniform elevation samples to create a virtual uniform linear array geometry. We illustrate the performance of the proposed method using simulated backscatter data.
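
    For orientation, standard two-antenna IFSAR converts interferometric phase to height through a relation of the following general form (a textbook expression added for context, not taken from this paper; the factor p depends on whether the collection is single-pass or repeat-pass):

        \Delta h \approx \frac{\lambda\, R \sin\theta}{2\pi\, p\, B_\perp}\,\Delta\phi, \qquad p \in \{1, 2\}

    where λ is the radar wavelength, R the slant range, θ the look angle, B⊥ the perpendicular baseline, and Δφ the interferometric phase difference. The parametric spectral estimation methods above generalize this two-aperture height estimate to apertures at more than two elevation angles.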

  14. Fabricating 3D figurines with personalized faces.

    PubMed

    Tena, J Rafael; Mahler, Moshe; Beeler, Thabo; Grosse, Max; Hengchin Yeh; Matthews, Iain

    2013-01-01

    We present a semi-automated system for fabricating figurines with faces that are personalised to the individual likeness of the customer. The efficacy of the system has been demonstrated by commercial deployments at Walt Disney World Resort and Star Wars Celebration VI in Orlando, Florida. Although the system is semi-automated, human intervention is limited to a few simple tasks to maintain the high throughput and consistent quality required for commercial application. In contrast to existing systems that fabricate custom heads that are assembled to pre-fabricated plastic bodies, our system seamlessly integrates 3D facial data with a predefined figurine body into a unique and continuous object that is fabricated as a single piece. The combination of state-of-the-art 3D capture, modelling, and printing that are the core of our system provides the flexibility to fabricate figurines whose complexity is only limited by the creativity of the designer. PMID:24808129

  15. Sensing and compressing 3-D models

    SciTech Connect

    Krumm, J.

    1998-02-01

    The goal of this research project was to create a passive and robust computer vision system for producing 3-D computer models of arbitrary scenes. Although the authors were unsuccessful in achieving the overall goal, several components of this research have shown significant potential. Of particular interest is the application of parametric eigenspace methods for planar pose measurement of partially occluded objects in gray-level images. The techniques presented provide a simple, accurate, and robust solution to the planar pose measurement problem. In addition, the representational efficiency of eigenspace methods used with gray-level features were successfully extended to binary features, which are less sensitive to illumination changes. The results of this research are presented in two papers that were written during the course of this project. The papers are included in sections 2 and 3. The first section of this report summarizes the 3-D modeling efforts.
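
    A minimal sketch of the parametric eigenspace idea referenced above: training images at known planar poses are projected into a low-dimensional eigenspace, and a query image's pose is estimated from its nearest projected neighbor. The array shapes and dimensionality below are illustrative assumptions, not the report's actual algorithm.

        # Illustrative parametric-eigenspace pose lookup (generic technique sketch).
        # X_train: (n_images, n_pixels) flattened training images at known planar poses;
        # poses: (n_images,) array of the corresponding rotation angles.
        import numpy as np

        def build_eigenspace(X_train, k=10):
            mean = X_train.mean(axis=0)
            _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
            basis = Vt[:k]                        # top-k eigenimages
            coords = (X_train - mean) @ basis.T   # training images in eigenspace
            return mean, basis, coords

        def estimate_pose(query, mean, basis, coords, poses):
            q = (query - mean) @ basis.T          # project query into the eigenspace
            nearest = int(np.argmin(np.linalg.norm(coords - q, axis=1)))
            return poses[nearest]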

  16. 3D measurement using circular gratings

    NASA Astrophysics Data System (ADS)

    Harding, Kevin

    2013-09-01

    3D measurement using methods of structured light is well known in the industry. Most such systems use some variation of straight lines, either as simple lines or with some form of encoding. This geometry assumes the lines will be projected from one side and viewed from another to generate the profile information. But what about applications where a wide triangulation angle may not be practical, particularly at longer standoff distances? This paper explores the use of circular grating patterns projected from a center point to achieve 3D information. Originally suggested by John Caulfield around 1990, the method had some interesting potential, particularly if combined with alternative means of measurement to traditional triangulation, including depth-from-focus methods. The possible advantages of a central reference point in the projected pattern may offer some different capabilities not as easily attained with a linear grating pattern. This paper will explore the pros and cons of the method and present some examples of possible applications.

  17. Azimuthally Anisotropic 3D Velocity Continuation

    DOE PAGES

    Burnett, William; Fomel, Sergey

    2011-01-01

    We extend time-domain velocity continuation to the zero-offset 3D azimuthally anisotropic case. Velocity continuation describes how a seismic image changes given a change in migration velocity. This description turns out to be of a wave propagation process, in which images change along a velocity axis. In the anisotropic case, the velocity model is multiparameter. Therefore, anisotropic image propagation is multidimensional. We use a three-parameter slowness model, which is related to azimuthal variations in velocity, as well as their principal directions. This information is useful for fracture and reservoir characterization from seismic data. We provide synthetic diffraction imaging examples to illustrate the concept and potential applications of azimuthal velocity continuation and to analyze the impulse response of the 3D velocity continuation operator.

  18. 3D Elevation Program: summary for Vermont

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  19. 3D Elevation Program: summary for Nebraska

    USGS Publications Warehouse

    Carswell, William J.

    2015-01-01

    The National Enhanced Elevation Assessment evaluated multiple elevation data acquisition options to determine the optimal data quality and data replacement cycle relative to cost to meet the identified requirements of the user community. The evaluation demonstrated that lidar acquisition at quality level 2 for the conterminous United States and quality level 5 interferometric synthetic aperture radar (ifsar) data for Alaska with a 6- to 10-year acquisition cycle provided the highest benefit/cost ratios. The 3D Elevation Program (3DEP) initiative selected an 8-year acquisition cycle for the respective quality levels. 3DEP, managed by the U.S. Geological Survey, the Office of Management and Budget Circular A–16 lead agency for terrestrial elevation data, responds to the growing need for high-quality topographic data and a wide range of other 3D representations of the Nation’s natural and constructed features.

  20. 3D Multifunctional Ablative Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Feldman, Jay; Venkatapathy, Ethiraj; Wilkinson, Curt; Mercer, Ken

    2015-01-01

    NASA is developing the Orion spacecraft to carry astronauts farther into the solar system than ever before, with human exploration of Mars as its ultimate goal. One of the technologies required to enable this advanced, Apollo-shaped capsule is a 3-dimensional quartz fiber composite for the vehicle's compression pad. During its mission, the compression pad serves first as a structural component and later as an ablative heat shield, partially consumed on Earth re-entry. This presentation will summarize the development of a new 3D quartz cyanate ester composite material, 3-Dimensional Multifunctional Ablative Thermal Protection System (3D-MAT), designed to meet the mission requirements for the Orion compression pad. Manufacturing development, aerothermal (arc-jet) testing, structural performance, and the overall status of material development for the 2018 EM-1 flight test will be discussed.

  1. Microfluidic 3D models of cancer

    PubMed Central

    Sung, Kyung Eun; Beebe, David J.

    2014-01-01

    Despite advances in medicine and biomedical sciences, cancer still remains a major health issue. Complex interactions between tumors and their microenvironment contribute to tumor initiation and progression and also contribute to the development of drug resistant tumor cell populations. The complexity and heterogeneity of tumors and their microenvironment make it challenging to both study and treat cancer. Traditional animal cancer models and in vitro cancer models are limited in their ability to recapitulate human structures and functions, thus hindering the identification of appropriate drug targets and therapeutic strategies. The development and application of microfluidic 3D cancer models have the potential to overcome some of the limitations inherent to traditional models. This review summarizes the progress in microfluidic 3D cancer models, their benefits, and their broad application to basic cancer biology, drug screening, and drug discovery. PMID:25017040

  2. Microfluidic 3D models of cancer.

    PubMed

    Sung, Kyung Eun; Beebe, David J

    2014-12-15

    Despite advances in medicine and biomedical sciences, cancer still remains a major health issue. Complex interactions between tumors and their microenvironment contribute to tumor initiation and progression and also contribute to the development of drug resistant tumor cell populations. The complexity and heterogeneity of tumors and their microenvironment make it challenging to both study and treat cancer. Traditional animal cancer models and in vitro cancer models are limited in their ability to recapitulate human structures and functions, thus hindering the identification of appropriate drug targets and therapeutic strategies. The development and application of microfluidic 3D cancer models have the potential to overcome some of the limitations inherent to traditional models. This review summarizes the progress in microfluidic 3D cancer models, their benefits, and their broad application to basic cancer biology, drug screening, and drug discovery.

  3. Faster Aerodynamic Simulation With Cart3D

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A NASA-developed aerodynamic simulation tool is ensuring the safety of future space operations while providing designers and engineers with an automated, highly accurate computer simulation suite. Cart3D, co-winner of NASA's 2002 Software of the Year award, is the result of over 10 years of research and software development conducted by Michael Aftosmis and Dr. John Melton of Ames Research Center and Professor Marsha Berger of the Courant Institute at New York University. Cart3D offers a revolutionary approach to computational fluid dynamics (CFD), the computer simulation of how fluids and gases flow around an object of a particular design. By fusing technological advancements in diverse fields such as mineralogy, computer graphics, computational geometry, and fluid dynamics, the software provides a new industrial geometry processing and fluid analysis capability with unsurpassed automation and efficiency.

  4. 3D Geo: An Alternative Approach

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A.

    2016-10-01

    The expression GEO is mostly used to denote a relation to the Earth. However, it should not be confined to what is related to the Earth's surface, as other objects, such as cultural heritage objects, also need three-dimensional representation and documentation. They include both tangible and intangible ones. In this paper the 3D data acquisition and 3D modelling of cultural heritage assets are briefly described and their significance is also highlighted. Moreover, the organization of such information, related to monuments and artefacts, into relational databases and its use for various purposes other than just geometric documentation is also described and presented. In order to help the reader understand the above, several characteristic examples are presented, their methodology explained, and their results evaluated.

  5. Debris Dispersion Model Using Java 3D

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Bardina, Jorge

    2004-01-01

    This paper describes a web-based simulation of Shuttle launch operations and debris dispersion. Java 3D graphics provides geometric and visual content, combined with a suitable mathematical model and behaviors of the Shuttle launch. Because the model is so heterogeneous and interrelated with various factors, 3D graphics combined with physical models provides mechanisms to understand the complexity of launch and range operations. The main focus of the modeling and simulation covers orbital dynamics and range safety. Range safety areas include destruct limit lines, telemetry and tracking, and population risk near the range. If the Shuttle explodes during launch, the resulting debris dispersion is modeled. The Shuttle launch and range operations in this paper are discussed based on operations at Kennedy Space Center, Florida, USA.

  6. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  7. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphics libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y-, and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  8. Future trends of 3D silicon sensors

    NASA Astrophysics Data System (ADS)

    Da Vià, Cinzia; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Haughton, Iain; Grenier, Philippe; Grinstein, Sebastian; Hansen, Thor-Erik; Hasi, Jasmine; Kenney, Christopher; Kok, Angela; Parker, Sherwood; Pellegrini, Giulio; Povoli, Marco; Tzhnevyi, Vladislav; Watts, Stephen J.

    2013-12-01

    Vertex detectors for the next LHC experiment upgrades will need to have low mass while at the same time being radiation hard, with sufficient granularity to meet the physics challenges of the next decade. Based on the experience gained with 3D silicon sensors for the ATLAS IBL project and the on-going developments in light materials, interconnectivity and cooling, this paper discusses possible solutions to these requirements.

  9. 'Berries' on the Ground 2 (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph of a microscopic image of soil featuring round, blueberry-shaped rock formations on the crater floor at Meridiani Planum, Mars. This image was taken on the 13th day of the Mars Exploration Rover Opportunity's journey, before the Moessbauer spectrometer, an instrument located on the rover's instrument deployment device, or 'arm,' was pressed down to take measurements. The area in this image is approximately 3 centimeters (1.2 inches) across.

  10. Adirondack Post-Drill (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool to drill into the rock. Debris from the use of the tool is visible to the left of the hole.

  11. Teat Morphology Characterization With 3D Imaging.

    PubMed

    Vesterinen, Heidi M; Corfe, Ian J; Sinkkonen, Ville; Iivanainen, Antti; Jernvall, Jukka; Laakkonen, Juha

    2015-07-01

    The objective of this study was to visualize, in a novel way, the morphological characteristics of bovine teats to gain a better understanding of the detailed teat morphology. We applied silicone casting and 3D digital imaging in order to obtain a more detailed image of the teat structures than that seen in previous studies. Teat samples were obtained from 65 dairy cows over 12 months of age that had been slaughtered at an abattoir. The teats were classified according to the teat condition scoring used in Finland and the lengths of the teat canals were measured. Silicone molds were made from the external teat surface surrounding the teat orifice and from the internal surface of the teat consisting of the papillary duct, Fürstenberg's rosette, and distal part of the teat cistern. The external and internal surface molds of 35 cows were scanned with a 3D laser scanner. The molds and the digital 3D models were used to evaluate internal and external teat surface morphology. A number of measurements were taken from the silicone molds. The 3D models reproduced the morphology of the teats accurately with high repeatability. Breed did not correlate with the teat classification score. The rosette was found to have significant variation in its size and number of mucosal folds. The internal surface morphology of the rosette did not correlate with the external surface morphology of the teat, implying that it is relatively independent of milking parameters that may impact the teat canal and the external surface of the teat. PMID:25382725

  12. Thermomechanical properties of 3d transition metals

    SciTech Connect

    Karaoglu, B.; Rahman, S.M.M. (Dept. of Physics)

    1994-05-15

    The authors have investigated the density variation of the Einstein temperatures and elastic constants of the 3d transition metals. In this respect they have employed the transition metal (TM) pair potentials involving the sp contribution with an appropriate exchange and correlation function, the d-band broadening contribution and the d-band hybridization term. These calculations are aimed at testing the TM pair potentials in generating the quasilocal and local thermomechanical properties.
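    The transition-metal pair potentials used in the paper are not reproduced in this record, but the link between a pair potential and an Einstein temperature is standard: the curvature of the potential at its minimum defines a vibrational frequency omega = sqrt(phi''(r0)/m), and theta_E = hbar*omega/k_B. The sketch below illustrates this with a generic Lennard-Jones potential as a stand-in (not the sp/d-band potentials of the paper) and order-of-magnitude parameters for a 3d metal.

```python
import math

HBAR = 1.054571817e-34   # J*s
KB = 1.380649e-23        # J/K

def lennard_jones(r, eps, sigma):
    """Generic pair potential, used here only as a stand-in for the
    transition-metal potentials discussed in the paper."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def einstein_temperature(phi, r0, mass, h=1e-13):
    """Einstein temperature from the curvature of a pair potential at its
    minimum: omega = sqrt(phi''(r0)/m), theta_E = hbar*omega/kB.
    This single-bond estimate ignores the lattice sum over neighbours."""
    second_deriv = (phi(r0 + h) - 2.0 * phi(r0) + phi(r0 - h)) / h**2
    omega = math.sqrt(second_deriv / mass)
    return HBAR * omega / KB

# Illustrative numbers of the right order for a 3d metal (not fitted values):
eps = 0.5 * 1.602e-19              # ~0.5 eV well depth, in joules
sigma = 2.3e-10                    # ~2.3 angstrom
r0 = 2.0 ** (1.0 / 6.0) * sigma    # Lennard-Jones minimum
mass = 55.85 * 1.66054e-27         # iron atomic mass in kg
print(einstein_temperature(lambda r: lennard_jones(r, eps, sigma), r0, mass))
```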

  13. Applications of 3D printing in healthcare

    PubMed Central

    2016-01-01

    3D printing is a relatively new, rapidly expanding manufacturing method that has found numerous applications in healthcare, in the automotive, aerospace and defense industries, and in many other areas. This review presents applications in medicine that are revolutionizing the way surgeries are carried out, disrupting the prosthesis and implant markets, and changing dentistry. The relatively new field of bioprinting, that is, printing with cells, is also briefly discussed. PMID:27785150

  14. 3D Integration for Wireless Multimedia

    NASA Astrophysics Data System (ADS)

    Kimmich, Georg

    The convergence of mobile phone, internet, mapping, gaming and office automation tools with high-quality video and still-image capture capability is becoming a strong market trend for portable devices. High-density video encode and decode, 3D graphics for gaming, increased application-software complexity and ultra-high-bandwidth 4G modem technologies are driving the CPU performance and memory bandwidth requirements close to the PC segment. These portable multimedia devices are battery operated, which requires the deployment of new low-power-optimized silicon process technologies and ultra-low-power design techniques at system, architecture and device level. Mobile devices also need to comply with stringent silicon-area and package-volume constraints. As for all consumer devices, low production cost and fast time-to-volume production is key for success. This chapter shows how 3D architectures can bring a possible breakthrough to meet the conflicting power, performance and area constraints. Multiple 3D die-stacking partitioning strategies are described and analyzed for their potential to improve the overall system power, performance and cost for specific application scenarios. Requirements and maturity of the basic process-technology bricks including through-silicon via (TSV) and die-to-die attachment techniques are reviewed. Finally, we highlight new challenges which will arise with 3D stacking and give an outlook on how they may be addressed: Higher power density will require thermal design considerations, new EDA tools will need to be developed to cope with the integration of heterogeneous technologies and to guarantee signal and power integrity across the die stack. The silicon/wafer test strategies have to be adapted to handle high-density IO arrays and ultra-thin wafers, and to provide built-in self-test of attached memories. New standards and business models have to be developed to allow cost-efficient assembly and testing of devices from different silicon and technology

  15. MRCK_3D contact detonation algorithm

    SciTech Connect

    Rougier, Esteban; Munjiza, Antonio

    2010-01-01

    Large-scale Combined Finite-Discrete Element Method (FEM-DEM) and Discrete Element Method (DEM) simulations involving contact between a large number of separate bodies need an efficient, robust and flexible contact detection algorithm. In this work the MRCK-3D search algorithm is outlined and its main CPU performance characteristics are evaluated. One of the most important aspects of this newly developed search algorithm is that it is applicable to systems consisting of many bodies of different shapes and sizes.
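    The record does not describe how MRCK-3D itself works, so the sketch below shows only the generic idea behind many DEM broad-phase contact searches: hash each body's bounding sphere into grid cells and test only bodies that share a cell. It is a schematic illustration, not the MRCK-3D algorithm, and the bounding-sphere body representation is an assumption.

```python
from collections import defaultdict
from itertools import combinations
import math

def candidate_contact_pairs(bodies, cell_size):
    """Generic cell-binning broad-phase contact search (not MRCK-3D itself).

    `bodies` is a list of (x, y, z, radius) bounding spheres.  Each body is
    registered in every grid cell its sphere overlaps; bodies sharing a cell
    whose spheres actually intersect become candidate pairs for the
    narrow-phase contact test.
    """
    cells = defaultdict(set)
    for i, (x, y, z, r) in enumerate(bodies):
        lo = (int(math.floor((x - r) / cell_size)),
              int(math.floor((y - r) / cell_size)),
              int(math.floor((z - r) / cell_size)))
        hi = (int(math.floor((x + r) / cell_size)),
              int(math.floor((y + r) / cell_size)),
              int(math.floor((z + r) / cell_size)))
        for cx in range(lo[0], hi[0] + 1):
            for cy in range(lo[1], hi[1] + 1):
                for cz in range(lo[2], hi[2] + 1):
                    cells[(cx, cy, cz)].add(i)
    pairs = set()
    for members in cells.values():
        for i, j in combinations(sorted(members), 2):
            xi, yi, zi, ri = bodies[i]
            xj, yj, zj, rj = bodies[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 <= (ri + rj) ** 2:
                pairs.add((i, j))
    return sorted(pairs)

# Three spheres, two of which touch:
print(candidate_contact_pairs([(0, 0, 0, 1.0), (1.5, 0, 0, 1.0), (10, 10, 10, 1.0)], 2.0))
```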

  16. 3D cartography of the Alpine Arc

    NASA Astrophysics Data System (ADS)

    Vouillamoz, N.; Sue, C.; Champagnac, J. D.; Calcagno, P.

    2012-04-01

    We present a 3D cartography of the Alpine arc, a highly non-cylindrical mountain belt, built using the 3D GeoModeller of the BRGM (French geological survey). The model makes it possible to handle the large-scale 3D structure of seventeen major crustal units of the belt (from the lower crust to the sedimentary cover nappes) and two main discontinuities (the Insubric Line and the Crustal Penninic Front). It provides a unique document for better understanding their structural relationships and for producing new sections. The study area comprises the western Alpine arc, from the Jura to the northwest up to the Bergell granite intrusion and the Lepontine Dome to the east, and is limited to the south by the Ligurian basin. The model is bounded vertically by a surface 10 km above sea level at the top and by the Moho interface at the bottom. We discarded the structural relationships between the Alps sensu stricto and the surrounding geodynamic systems, such as the Rhine graben or the connection with the Apennines. The 3D model is based on the global integration of various data such as the DEM of the Alps, the Moho isobaths, simplified geological and tectonic maps of the belt, the ECORS-CROP and NFP-20 crustal cross-sections, and complementary cross-sections built specifically to resolve local complexities. The data were first integrated into a GIS project, homogenizing the different spatial reference systems, to prepare their implementation in the GeoModeller. The global model is finally interpolated from all these data using the potential field method. The final document is a new three-dimensional cartography that can be used as input for further Alpine studies.

  17. Monolithic 3D CMOS Using Layered Semiconductors.

    PubMed

    Sachid, Angada B; Tosun, Mahmut; Desai, Sujay B; Hsu, Ching-Yi; Lien, Der-Hsien; Madhvapathy, Surabhi R; Chen, Yu-Ze; Hettick, Mark; Kang, Jeong Seuk; Zeng, Yuping; He, Jr-Hau; Chang, Edward Yi; Chueh, Yu-Lun; Javey, Ali; Hu, Chenming

    2016-04-01

    Monolithic 3D integrated circuits using transition metal dichalcogenide materials and low-temperature processing are reported. A variety of digital and analog circuits are implemented on two sequentially integrated layers of devices. Inverter circuit operation at an ultralow supply voltage of 150 mV is achieved, paving the way to high-density, ultralow-voltage, and ultralow-power applications. PMID:26833783

  18. NGT-3D: a simple nematode cultivation system to study Caenorhabditis elegans biology in 3D

    PubMed Central

    Lee, Tong Young; Yoon, Kyoung-hye; Lee, Jin Il

    2016-01-01

    The nematode Caenorhabditis elegans is one of the premier experimental model organisms today. In the laboratory, it displays characteristic development, fertility, and behaviors in a two-dimensional habitat. In nature, however, C. elegans is found in three-dimensional environments such as rotting fruit. To investigate the biology of C. elegans in a controlled 3D environment we designed a nematode cultivation habitat which we term the nematode growth tube, or NGT-3D. NGT-3D allows for the growth of both nematodes and the bacteria they consume. Worms show comparable rates of growth, reproduction and lifespan when bacterial colonies in the 3D matrix are abundant. However, when bacteria are sparse, growth and brood size fail to reach levels observed in standard 2D plates. Using NGT-3D we observed drastic deficits in fertility in a sensory mutant in 3D compared to 2D, a defect likely due to an inability to locate bacteria. Overall, NGT-3D will sharpen our understanding of nematode biology and allow scientists to investigate questions of nematode ecology and evolutionary fitness in the laboratory. PMID:26962047

  19. bioWeb3D: an online webGL 3D data visualisation tool

    PubMed Central

    2013-01-01

    Background Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for untrained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all necessary functionalities to represent and manipulate biological 3D datasets, very few are easily accessible (browser-based), cross-platform and usable by non-expert users. Results An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js, written in JavaScript, bioWeb3D allows the simultaneous visualisation of multiple large datasets supplied via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets. PMID:23758781

  20. STAR3D: a stack-based RNA 3D structural alignment tool

    PubMed Central

    Ge, Ping; Zhang, Shaojie

    2015-01-01

    The various roles of versatile non-coding RNAs typically require the attainment of complex high-order structures. Therefore, comparing the 3D structures of RNA molecules can yield in-depth understanding of their functional conservation and evolutionary history. Recently, many powerful tools have been developed to align RNA 3D structures. Although some methods rely on both backbone conformations and base-pairing interactions, none of them consider the entire hierarchical formation of the RNA secondary structure. One of the major issues is that directly applying 2D structure-matching algorithms to the 3D coordinates is particularly time-consuming. In this article, we propose a novel RNA 3D structural alignment tool, STAR3D, which takes full account of the 2D relations between stacks without the complicated comparison of secondary structures. First, the conserved 3D stacks in the inputs are identified and then combined into a tree-like consensus. Afterward, the loop regions are compared one-to-one in accordance with their relative positions in the consensus tree. The experimental results show that the prediction of STAR3D is more accurate than that of other state-of-the-art tools for both non-homologous and homologous RNAs, with shorter running time. PMID:26184875
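    STAR3D's stack matching and consensus-tree construction are not detailed in this record, but the primitive that 3D structural comparison ultimately rests on is rigid-body superposition of matched coordinate sets. The sketch below is a minimal Kabsch superposition (RMSD after optimal rotation), shown as that generic building block rather than as STAR3D's implementation; the input is assumed to be two equally sized (N, 3) coordinate arrays for already-matched residues.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal rigid
    superposition (Kabsch algorithm).  A generic building block for 3D
    structure comparison; not the STAR3D stack-matching code itself."""
    P = np.array(P, dtype=float)
    Q = np.array(Q, dtype=float)
    P -= P.mean(axis=0)                       # remove translation
    Q -= Q.mean(axis=0)
    H = P.T @ Q                               # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                        # optimal rotation of P onto Q
    diff = (P @ R.T) - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Identical point sets related by a pure rotation should superpose to ~0 RMSD:
theta = np.pi / 3
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
coords = np.random.default_rng(0).normal(size=(10, 3))
print(kabsch_rmsd(coords, coords @ rot.T))    # ~0.0
```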