Science.gov

Sample records for 3-d prestack depth

  1. Prestack depth migration for 3D offshore methane hydrates data

    NASA Astrophysics Data System (ADS)

    Jang, Seonghyung; Kim, Tae-yeon

    2015-04-01

    One of the indicators for the existence of methane hydrates in seismic data is the BSR (bottom simulating reflector), which marks the base of the gas hydrate stability zone. It shows reversed polarity compared to the water-bottom reflection and high reflection amplitudes. It is well known that acoustic velocity decreases at the contact between gas hydrates and free-gas-bearing sediments. Prestack reverse time migration (RTM) is a method for imaging the subsurface in the depth domain using the inner product of the forward-extrapolated source wavefield and the backward-extrapolated receiver wavefield. It is widely used for imaging complex subsurface structures while preserving amplitudes. We applied RTM to 3D offshore seismic data for methane hydrates exploration. The study area is 12 x 25 km with 120 survey lines offshore. The shot gathers were acquired with two streamers of 240 channels each. Shot and receiver spacings are 25 m and 12.5 m, respectively. The line spacing is 100 m. The near offset is 150 m and the maximum far offset is 3137.5 m. The record length is 7 s with a sampling rate of 1 ms. Shot gathers, resampled to 4 ms, were processed to enhance the signal-to-noise ratio using conventional basic processing such as amplitude recovery, deconvolution, and band-pass filtering. Interval velocities calculated from conventional stacking velocities were used as the velocity model for RTM. The basic-processed shot gathers and the velocity model were used as input data to obtain a 3D image using RTM. For RTM, a 20 Hz Ricker wavelet was used, and the grid spacing in the x, y, and z directions was 20 x 20 x 20 m. The total number of shot gathers is 176,387, and every 10th shot gather was chosen to reduce computation time and storage. The result is a 3D image with in-line, cross-line, and depth-slice views. High-amplitude events are shown around (6 km, 4 km, 2.3 km) in the in-line image. Each depth slice shows amplitude variation according to depth. Especially channel structure variation
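
    A minimal sketch of the RTM principle described above: a source wavefield is extrapolated forward in time, the recorded data are extrapolated backward, and the two are correlated at zero lag to form the depth image. The 2D constant-density acoustic propagator, the toy model and all array names below are illustrative assumptions, not the survey parameters of this study.

      # Sketch of the zero-lag cross-correlation imaging condition used in prestack
      # RTM: image(x,z) = sum_t S(x,z,t) * R(x,z,t), where S is the forward-propagated
      # source wavefield and R the backward-propagated receiver wavefield.
      # Toy 2D acoustic model, no absorbing boundaries; everything here is illustrative.
      import numpy as np

      def ricker(f0, dt, nt):
          t = (np.arange(nt) - nt // 8) * dt
          a = (np.pi * f0 * t) ** 2
          return (1.0 - 2.0 * a) * np.exp(-a)

      def step(p_prev, p_cur, vel, dt, dx):
          """One explicit FD time step of the 2D acoustic wave equation."""
          lap = (np.roll(p_cur, 1, 0) + np.roll(p_cur, -1, 0) +
                 np.roll(p_cur, 1, 1) + np.roll(p_cur, -1, 1) - 4.0 * p_cur) / dx ** 2
          return 2.0 * p_cur - p_prev + (vel * dt) ** 2 * lap

      nz, nx, nt, dx, dt, f0 = 101, 151, 700, 10.0, 1e-3, 20.0
      vel = np.full((nz, nx), 2000.0)
      vel[60:, :] = 2500.0                      # a single reflector for the demo
      src = (2, nx // 2)                        # shot near the surface
      rec_z, rec_x = 2, np.arange(10, nx - 10)  # receiver line
      wav = ricker(f0, dt, nt)

      # Forward modelling: store the source wavefield and record synthetic data.
      p0 = np.zeros((nz, nx)); p1 = np.zeros_like(p0)
      src_wf = np.zeros((nt, nz, nx)); data = np.zeros((nt, rec_x.size))
      for it in range(nt):
          p2 = step(p0, p1, vel, dt, dx)
          p2[src] += wav[it] * dt ** 2
          src_wf[it] = p2
          data[it] = p2[rec_z, rec_x]
          p0, p1 = p1, p2

      # Backward propagation of the recorded data in a smooth migration model,
      # correlated with the stored source wavefield at zero lag.
      mig_vel = np.full_like(vel, 2000.0)
      image = np.zeros((nz, nx))
      p0[:] = 0.0; p1[:] = 0.0
      for it in range(nt - 1, -1, -1):
          p2 = step(p0, p1, mig_vel, dt, dx)
          p2[rec_z, rec_x] += data[it] * dt ** 2
          image += src_wf[it] * p2
          p0, p1 = p1, p2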

  2. 3-D prestack Kirchhoff depth migration: From prototype to production in a massively parallel processor environment

    SciTech Connect

    Chang, H.; Solano, M.; VanDyke, J.P.; McMechan, G.A.; Epili, D.

    1998-03-01

    Portable, production-scale 3-D prestack Kirchhoff depth migration software capable of full-volume imaging has been successfully implemented and applied to a six-million trace (46.9 Gbyte) marine data set from a salt/subsalt play in the Gulf of Mexico. Velocity model building and updates use an image-driven strategy and were performed in a Sun Sparc environment. Images obtained by 3-D prestack migration after three velocity iterations are substantially better focused and reveal drilling targets that were not visible in images obtained from conventional 3-D poststack time migration. Amplitudes are well preserved, so anomalies associated with known reservoirs conform to the petrophysical predictions. Prototype development was on an 8-node Intel iPSC860 computer; the production version was run on an 1824-node Intel Paragon computer. The code has been successfully ported to CRAY (T3D) and Unix workstation (PVM) environments.

  3. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, for 3D data migration, the resource requirements of the algorithm grow as the data size increases. This challenges its practical implementation even on current-generation high-performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM-series supercomputer. Optimization, performance and scalability experiment results, along with the migration outcome, show the effectiveness of the parallel algorithm.
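
    A minimal serial sketch of the flexi-depth idea: the depth axis is split into chunks sized from a memory budget, and each chunk is migrated with only its own traveltime information resident. Straight-ray traveltimes, the toy zero-offset geometry and the memory-budget heuristic are assumptions for illustration; the paper's eikonal traveltime tables and the hybrid MPI+OpenMP layer are not reproduced.

      # Sketch of flexi-depth iterations: the image volume is migrated one depth chunk
      # at a time, with the chunk size chosen from an available-memory budget.
      import numpy as np

      def traveltime(xs, x, z, v):
          """Straight-ray traveltime from a surface point xs to image point (x, z)."""
          return np.hypot(x - xs, z) / v

      def kirchhoff_flexi_depth(traces, src_x, rec_x, dt, x_img, z_img, v, max_mem_mb=64):
          """Kirchhoff summation, migrating one depth chunk at a time."""
          ntr, nt = traces.shape
          nx, nz = x_img.size, z_img.size
          # choose the number of depth samples per iteration from a memory budget
          bytes_per_depth = ntr * nx * 8 * 2          # rough source + receiver table cost
          nz_chunk = max(1, int(max_mem_mb * 2 ** 20 // bytes_per_depth))
          image = np.zeros((nz, nx))
          for z0 in range(0, nz, nz_chunk):           # flexi-depth iterations
              zc = z_img[z0:z0 + nz_chunk]
              xg, zg = np.meshgrid(x_img, zc)         # image points of this chunk
              for itr in range(ntr):
                  t = traveltime(src_x[itr], xg, zg, v) + traveltime(rec_x[itr], xg, zg, v)
                  it = np.clip(np.rint(t / dt).astype(int), 0, nt - 1)
                  image[z0:z0 + nz_chunk] += traces[itr, it]
          return image

      # Tiny usage example with a single diffractor at (500 m, 400 m).
      dt, v = 0.002, 2000.0
      src_x = np.linspace(0.0, 1000.0, 51); rec_x = src_x.copy()   # zero-offset geometry
      nt = 501
      traces = np.zeros((src_x.size, nt))
      t_diff = 2.0 * np.hypot(src_x - 500.0, 400.0) / v
      traces[np.arange(src_x.size), np.rint(t_diff / dt).astype(int)] = 1.0
      img = kirchhoff_flexi_depth(traces, src_x, rec_x, dt,
                                  np.linspace(0, 1000, 101), np.linspace(0, 800, 81), v)
      print(np.unravel_index(np.argmax(img), img.shape))  # peaks near the diffractor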

  4. 3D pre-stack depth migration of receiver functions with the fast marching method: a Kirchhoff approach

    NASA Astrophysics Data System (ADS)

    Cheng, Cheng; Bodin, Thomas; Allen, Richard M.

    2016-02-01

    We present a novel 3D pre-stack Kirchhoff depth migration (PKDM) method for teleseismic receiver functions. The proposed algorithm considers the effects of diffraction, scattering, and travel time alteration caused by 3D volumetric heterogeneities. It is therefore particularly useful for imaging complex 3D structures such as dipping discontinuities, which is hard to accomplish with traditional methods. The scheme is based on the acoustic wave migration principle, where at each time step of the receiver function, the energy is migrated back to the ensemble of potential conversion points in the image, given a smooth 3D reference model. Travel times for P and S waves are computed with an efficient Eikonal solver, the Fast Marching Method. We also consider elastic scattering patterns, where the amplitude of converted S waves depends on the angle between the incident P wave, and the scattered S wave. Synthetic experiments demonstrate the validity of the method for a variety of dipping angle discontinuities. Comparison with the widely used Common Conversion Point (CCP) stacking method reveals that our migration shows considerable improvement. For example, the effect of multiple reflections that usually produce apparent discontinuities is avoided. The proposed approach is practical, computationally efficient, and is therefore a potentially powerful alternative to standard CCP methods for imaging large-scale continental structure under dense networks.
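
    The traveltime computation at the heart of the scheme can be sketched with an off-the-shelf fast marching eikonal solver. The snippet below assumes the third-party scikit-fmm package, a 2D toy velocity model and a vertically incident teleseismic P wave, none of which come from the paper.

      # Compute P and S traveltime volumes with a fast marching eikonal solver, as
      # needed to map each receiver-function sample back to candidate conversion points.
      import numpy as np
      import skfmm

      nz, nx, dx = 200, 300, 1.0               # grid in km
      vp = np.full((nz, nx), 6.0)              # smooth reference P velocity (km/s)
      vp[100:, :] = 8.0                        # crude "mantle" below 100 km
      vs = vp / 1.73                           # S velocity from a fixed Vp/Vs ratio

      def travel_time_from(point, speed):
          """First-arrival traveltimes from a single grid point via fast marching."""
          phi = np.ones(speed.shape)
          phi[point] = -1.0                    # zero level set around the source point
          return skfmm.travel_time(phi, speed, dx=dx)

      station = (0, 150)                       # receiver at the surface
      tt_s_station = travel_time_from(station, vs)     # S leg: conversion point -> station

      # For a vertically incident teleseismic plane P wave, the incoming P traveltime
      # to each grid node is the vertical integral of slowness from the bottom upward.
      tt_p_incoming = np.cumsum(dx / vp[::-1, :], axis=0)[::-1, :]

      # Delay of a P-to-S conversion at each candidate image point relative to the
      # direct P arrival at the station -- the quantity matched against the
      # receiver-function time axis during migration.
      delay = tt_s_station + tt_p_incoming - tt_p_incoming[station]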

  5. True amplitude prestack depth migration

    NASA Astrophysics Data System (ADS)

    Deng, Feng

    Reliable analysis of amplitude variation with offset (or with angle) requires accurate amplitudes from prestack migration. In routine seismic data processing, amplitude balancing and automatic gain control are often used to reduce lateral amplitude variations. However, these methods are empirical and lack a solid physical basis; thus, there are uncertainties that might produce erroneous conclusions, and hence cause economic loss. During wavefield propagation, geometrical spreading, intrinsic attenuation, transmission losses and energy conversion significantly distort the wavefield amplitude. Most current true-amplitude migrations compensate only for geometrical spreading. A new prestack depth migration based on the framework of reverse-time migration in the time-space domain was developed in this dissertation with the aim of compensating for all of the propagation effects in one integrated algorithm. Geometrical spreading is automatically included because of the use of full two-way wave extrapolation. Viscoelastic wave equations are solved to handle the intrinsic attenuation with an a priori quality factor. Transmission losses for both up- and down-going waves are compensated using a two-pass, recursive procedure based on extracting the angle-dependent reflection/transmission coefficients from prestack migration. The losses caused by the conversion of energy from one elastic mode to another are accounted for through elastic wave extrapolation; the influence of the S-wave velocity contrast on the P-wave reflection coefficient is implicitly included by using the Zoeppritz equations to describe the reflection and transmission at an elastic interface. Only smooth background models are assumed to be known. The contrasts/ratios of the model parameters can be estimated by fitting the compensated angle-dependent reflection coefficients obtained from data for multiple sources. This is one useful by-product of the algorithm. Numerical tests on both 2D and 3D scalar

  6. Prestack reverse time migration for 3D marine reflection seismic data

    SciTech Connect

    Jang, Seonghyung; Kim, Taeyoun

    2015-03-10

    Prestack reverse time migration (RTM) is a method for imaging the subsurface using the inner product of wavefield extrapolations in the shot and receiver domains. It is well known that RTM preserves amplitudes and phases better than other prestack migrations. Since 3D seismic data volumes are huge and the computation is heavy, parallel computing is required to obtain a meaningful depth image of the 3D subsurface. We implemented a parallelized version of 3D RTM for prestack depth migration. The results of a numerical example for the 3D SEG/EAGE salt model showed good agreement with the original geological model. We applied RTM to offshore 3D seismic reflection data. The study area is 12 × 25 km with 120 survey lines. Shot and receiver spacings are 25 m and 12.5 m, respectively. The line spacing is 100 m. Shot gathers were preprocessed to enhance the signal-to-noise ratio, and the velocity model was calculated from conventional stacking velocities. Both were used to obtain a 3D image using RTM. The results show a reasonable subsurface image.

  7. Prestack depth migration of bistatic georadar data

    NASA Astrophysics Data System (ADS)

    Ferguson, R. J.; Yedlin, M. J.; Nielsen, L.

    2012-04-01

    Prestack Depth Migration (PSDM) of bi-static georadar data is computationally expensive because each georadar trace is migrated separately from the others, and then the set of migrated traces is stacked into a single image. For example, given a dataset of 1000 georadar traces, 1000 migrations are computed and then stacked. The computational effort of PSDM scales linearly with the number of traces. Zero Offset Migration (ZOM) is comparatively inexpensive. This is because no matter how many traces are collected, only a single migration is computed, and no stacking procedure is required. The computational cost of ZOM scales logarithmically with the number of traces. Further, for bi-static data with a non-zero offset, ZOM and PSDM return theoretically the same result in the deeper part of the image, and they differ only in the very shallowest part of the image. As a result, the extra cost of PSDM and its marginal expected benefit relative to ZOM ensure that PSDM is seldom applied to georadar data. We find through numerical experiment and field trials (with 1 m antenna spacing), however, that PSDM is actually much better than ZOM not only for the shallow part of the image but for the deeper part as well. We find that this difference is so significant that in many cases it will justify the extra cost of PSDM. In particular, we find that the difference in image quality is due to a significant reduction in migration artifact strength and pervasiveness. We expect that, though our PSDM and ZOM algorithms are identical internally, the "stacking" process that distinguishes PSDM from ZOM gives rise to the image improvement. For our data examples, we have adapted a PSDM algorithm that is normally applied to seismic imaging for oil and gas exploration. It is a phase-shift based algorithm that accommodates lateral velocity variation, anisotropy, and irregular acquisition spacing. We find that this algorithm is efficient and easy to use.

  8. Prestack depth imaging via model-independent stacking

    NASA Astrophysics Data System (ADS)

    Druzhinin, Alexander; MacBeth, Colin; Hitchen, Ken

    1999-12-01

    Most seismic reflection imaging methods are confronted with the difficulty of accurately knowing input velocity information. To eliminate this, we develop a special prestack depth migration technique which avoids the necessity of constructing a macro-velocity model. It is based upon the weighted Kirchhoff-type migration formula expressed in terms of model-independent stacking velocity and arrival angle. This formula is applied to synthetic sub-basaltic data. Numerical results show that the method can be used to successfully image beneath basalts.

  9. Practical aspects of prestack depth migration with finite differences

    SciTech Connect

    Ober, C.C.; Oldfield, R.A.; Womble, D.E.; Romero, L.A.; Burch, C.C.

    1997-07-01

    Finite-difference prestack depth migration offers significant improvements over Kirchhoff methods in imaging near or under salt structures. The authors have implemented a finite-difference prestack depth migration algorithm for use on massively parallel computers, which is discussed here. The image quality of the finite-difference scheme has been investigated and suggested improvements are discussed. In this presentation, the authors discuss an implicit finite-difference migration code, called Salvo, that has been developed through an ACTI (Advanced Computational Technology Initiative) joint project. This code is designed to be efficient on a variety of massively parallel computers. It takes advantage of both frequency and spatial parallelism as well as the use of nodes dedicated to data input/output (I/O). Besides giving an overview of the finite-difference algorithm and some of the parallelism techniques used, migration results using both Kirchhoff and finite-difference migration will be presented and compared. The authors start with a very simple cartoon model where one can intuitively see the multiple travel paths and some of the potential problems that will be encountered with Kirchhoff migration. More complex synthetic models as well as results from actual seismic data from the Gulf of Mexico will be shown.

  10. Salt flank imaging by integrated prestack depth migration of VSP and surface

    NASA Astrophysics Data System (ADS)

    Jang, Seonghyung; Kim, Tae-yeon

    2014-05-01

    Since Vertical Seismic Profile (VSP) data include wavefields which directly measure physical properties between the surface and geological interfaces, they are usually used for detecting dip, anisotropy, and reflection amplitude or waveform with respect to incidence angle. Though VSP covers only the vicinity of the borehole compared to surface seismic, it gives high resolution and is helpful for finding the precise location of a well in the 3-D image from surface seismic data. VSP data normally have a smaller Fresnel zone and wider bandwidth than surface seismic data due to less absorption of the higher frequencies. This gives a high-fidelity reservoir image for effective reservoir monitoring such as 4D time-lapse seismic and carbon capture and storage. Prestack reverse time migration (RTM) is widely used for imaging complex subsurface structures. RTM is a method for imaging the subsurface in the depth domain using the inner product of the forward-extrapolated source wavefield and the backward-extrapolated receiver wavefield. Since RTM is applicable to any source-receiver geometry, the same algorithm can be applied to VSP and surface seismic data. In this study, RTM is implemented for the integrated depth imaging of walk-away VSP and surface seismic data in order to obtain a high-resolution salt flank image. A synthetic test example includes a schematic flank of a salt body with horizontal layers. The model, 8 km wide by 4 km deep, represents a simple salt body with a velocity of 3.0 km/s in a background with a velocity of 2.0 km/s. The source wavelet is zero-phase with a central frequency of 10 Hz for the surface seismic and 20 Hz for the VSP data. VSP data were recorded in a central borehole located 4.0 km from the left side of the model; the 151 receivers in the borehole were on 20 m spacing between depths of 0.5 km and 3.5 km. We acquired the surface seismic data using 101 surface sources on 40 m spacing between 2.3 km and 6.3 km. The 101 receivers on the

  11. Three-dimensional pre-stack depth migration of receiver functions with the fast marching method: a Kirchhoff approach

    NASA Astrophysics Data System (ADS)

    Cheng, Cheng; Bodin, Thomas; Allen, Richard M.

    2016-05-01

    We present a novel 3-D pre-stack Kirchhoff depth migration (PKDM) method for teleseismic receiver functions. The proposed algorithm considers the effects of diffraction, scattering and traveltime alteration caused by 3-D volumetric heterogeneities. It is therefore particularly useful for imaging complex 3-D structures such as dipping discontinuities, which is hard to accomplish with traditional methods. The scheme is based on the acoustic wave migration principle, where at each time step of the receiver function, the energy is migrated back to the ensemble of potential conversion points in the image, given a smooth 3-D reference model. Traveltimes for P and S waves are computed with an efficient eikonal solver, the fast marching method. We also consider elastic scattering patterns, where the amplitude of converted S waves depends on the angle between the incident P wave and the scattered S wave. Synthetic experiments demonstrate the validity of the method for a variety of dipping angle discontinuities. Comparison with the widely used common conversion point (CCP) stacking method reveals that our migration shows considerable improvement. For example, the effect of multiple reflections that usually produce apparent discontinuities is avoided. The proposed approach is practical, computationally efficient, and is therefore a potentially powerful alternative to standard CCP methods for imaging large-scale continental structure under dense networks.

  12. Gaussian beam prestack depth migration of converted wave in TI media

    NASA Astrophysics Data System (ADS)

    Han, Jianguang; Wang, Yun; Xing, Zhantao; Lu, Jun

    2014-10-01

    Increasing amounts of multi-component seismic data are being acquired on land and offshore because more complete seismic wavefield information is beneficial for structural imaging, fluid detection, and reservoir monitoring. S-waves are typically influenced more by anisotropy in a medium than are P-waves; as a result, the anisotropy cannot be ignored during converted PS-wave imaging. Gaussian beam migration, an elegant and efficient depth migration method, is becoming a new topic in the study of PS-wave migration; its accuracy is comparable to that of wave-equation migration, and its flexibility is comparable to that of Kirchhoff migration. In this paper, we introduce an anisotropic Gaussian beam prestack depth migration (GB-PSDM) method for the converted PS-wave, in which the anisotropic medium can be a transversely isotropic (TI) medium with a vertical or tilted symmetry axis. We present the GB-PSDM imaging condition for PS-wave common-shot gathers and derive the ray tracing of P- and SV-waves in two-dimensional TI media. The migration impulse responses of P- and SV-propagation modes in TI media with both vertical and tilted symmetry axes are presented. The results of numerical examples indicate that the method introduced here offers significant improvements in the quality of converted PS-wave imaging compared with an isotropic algorithm.
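
    Anisotropic ray tracing of the kind used in GB-PSDM requires angle-dependent phase velocities. The sketch below uses the standard weak-anisotropy (Thomsen) approximations for P and SV phase velocities in a VTI medium; these textbook formulas stand in for, and are not necessarily identical to, the expressions used in the paper. For a tilted symmetry axis the angle is simply measured from the tilted axis.

      # Weak-anisotropy (Thomsen) approximations for P and SV phase velocities in a
      # VTI medium, as needed by anisotropic ray tracing for Gaussian beam migration.
      import numpy as np

      def vp_phase(theta, vp0, eps, delta):
          """P-wave phase velocity vs. angle theta (radians) from the symmetry axis."""
          s, c = np.sin(theta), np.cos(theta)
          return vp0 * (1.0 + delta * s ** 2 * c ** 2 + eps * s ** 4)

      def vsv_phase(theta, vp0, vs0, eps, delta):
          """SV-wave phase velocity under the same weak-anisotropy assumption."""
          s, c = np.sin(theta), np.cos(theta)
          sigma = (vp0 / vs0) ** 2 * (eps - delta)
          return vs0 * (1.0 + sigma * s ** 2 * c ** 2)

      theta = np.radians(np.arange(0, 91, 15))
      print(vp_phase(theta, 3000.0, 0.2, 0.1))   # 3000 m/s vertically, 3600 m/s horizontally
      print(vsv_phase(theta, 3000.0, 1500.0, 0.2, 0.1))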

  13. Prestack depth migration for complex 2D structure using phase-screen propagators

    SciTech Connect

    Roberts, P.; Huang, Lian-Jie; Burch, C.; Fehler, M.; Hildebrand, S.

    1997-11-01

    We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra I, high quality images were obtained in as little as 8.7 CPU hours and adequate images were obtained in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
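
    One depth step of a phase-screen (split-step) extrapolator can be sketched as a background phase shift in the horizontal wavenumber domain followed by a thin slowness-perturbation screen applied in space. The choice of the layer-mean slowness as reference velocity and the toy geometry below are assumptions of this sketch.

      # Single downward-continuation depth step with a phase-screen propagator.
      import numpy as np

      def phase_screen_step(wavefield, freq, velocity_layer, dx, dz):
          """Extrapolate a monochromatic wavefield u(x) down by dz through one layer."""
          omega = 2.0 * np.pi * freq
          slowness = 1.0 / velocity_layer
          s_ref = slowness.mean()                       # reference (background) slowness
          kx = 2.0 * np.pi * np.fft.fftfreq(wavefield.size, d=dx)
          # background phase shift in the wavenumber domain; an imaginary vertical
          # wavenumber makes evanescent components decay
          kz = np.sqrt(((omega * s_ref) ** 2 - kx ** 2).astype(complex))
          shifted = np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz))
          # thin-lens screen for the lateral slowness perturbation, applied in space
          screen = np.exp(1j * omega * (slowness - s_ref) * dz)
          return shifted * screen

      # Usage: extrapolate a 25 Hz component one 10 m step through a laterally varying layer.
      nx, dx, dz = 256, 12.5, 10.0
      x = np.arange(nx) * dx
      v_layer = 2000.0 + 500.0 * (x > 1600.0)           # a lateral velocity jump
      u0 = np.exp(-((x - 1000.0) / 100.0) ** 2).astype(complex)
      u1 = phase_screen_step(u0, 25.0, v_layer, dx, dz)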

  14. Full-Wavefield Modeling and Pre-Stack Depth Migration of Common-Source Seismic Data

    NASA Astrophysics Data System (ADS)

    Chen, How-Wei

    1992-01-01

    The goal of solving geophysical problems can be thought of as a data mapping or transform procedure. Through various techniques, the observed seismic data can be transformed into the solution domain to estimate the Earth properties. Seismic wave field simulation is a forward process used to synthesize the seismic responses of an Earth model. Seismic wave field imaging is an inverse process used to estimate the Earth parameters from observed seismic data. In this dissertation, finite-difference and pseudo-spectral computations, in two- and three-dimensional space, are used for full wave field simulations and imaging of common-source data. Numerical simulation is developed for seismic sources and multi-attribute wave fields in two-dimensional acoustic and elastic media. P- and S-waves can be primarily separated in the resulting seismograms by vector operators in simulated surface survey, Vertical Seismic Profile (VSP) and cross-hole recording geometries. Three-component displacement seismograms can be approximately simulated by treating the acoustic field as a scalar potential field. The algorithm is applied to a complex multi-component, multi-offset walkaway circular VSP data from offshore California. Numerical modeling of large-scale, wide-aperture 3-D seismic data volumes is performed using a 3-D pseudo-spectral approach. Asymmetrical source and 3-D wave propagation effects in physical model data are identified and interpreted through iterative numerical modeling. Prestack reverse-time migration algorithms based on finite-difference and pseudo-spectral wave field extrapolators are developed for acoustic media in two- and three-dimensions. The excitation time imaging condition is computed by ray tracing and by finite-difference solution of the Eikonal equation. I generalize the concept of reverse-time migration and apply it for the correction of near-surface static effects. The feasibility of using very large scale, very wide-aperture 3-D seismic data recorded on a

  15. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2015-08-01

    Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterations. There are three key steps in LSM: (1) calculate the data residuals between the observed data and data demigrated from the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate and high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is always a tough task. The limited-memory BFGS (L-BFGS) method can evaluate the Hessian matrix indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration. We then validate the introduced approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively recover the reflectivity model and has a faster convergence rate than two comparison gradient methods. It might be significant for general complex subsurface imaging.
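
    The three steps listed above map naturally onto a generic quasi-Newton loop. The sketch below uses scipy's limited-memory BFGS on a toy linear operator standing in for Kirchhoff demigration/migration; the operator, model size and noise level are placeholders, not anything from the paper.

      # Least-squares migration as a quasi-Newton (L-BFGS) inversion:
      # minimize 0.5 * ||L m - d||^2 over the reflectivity m, where L is the
      # demigration (forward) operator and L^T applied to the residual is the
      # migration of the residual (the gradient).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n_model, n_data = 200, 400
      L = rng.standard_normal((n_data, n_model)) / np.sqrt(n_data)  # toy demigration operator
      m_true = np.zeros(n_model)
      m_true[[40, 90, 150]] = [1.0, -0.7, 0.5]                       # sparse reflectivity
      d_obs = L @ m_true + 0.01 * rng.standard_normal(n_data)        # observed data

      def misfit_and_gradient(m):
          residual = L @ m - d_obs            # step 1: data residual (demigrated - observed)
          gradient = L.T @ residual           # step 2: migrate the residual
          return 0.5 * residual @ residual, gradient

      # step 3: update the reflectivity with L-BFGS (history size m ~ 10 as in the paper)
      result = minimize(misfit_and_gradient, np.zeros(n_model), jac=True,
                        method="L-BFGS-B", options={"maxcor": 10, "maxiter": 50})
      m_est = result.x
      print("relative model error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))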

  16. Recent developments in DFD (depth-fused 3D) display and arc 3D display

    NASA Astrophysics Data System (ADS)

    Suyama, Shiro; Yamamoto, Hirotsugu

    2015-05-01

    We will report our recent developments in DFD (depth-fused 3D) display and arc 3D display, both of which have smooth movement parallax. Firstly, the fatigueless DFD display, composed of only two layered displays with a gap, produces continuous perceived depth by changing the luminance ratio between the two images. Two new methods, called "Edge-based DFD display" and "Deep DFD display", have been proposed in order to solve the two severe problems of viewing-angle and perceived-depth limitations. The Edge-based DFD display, layered from an original 2D image and its edge part with a gap, can relax the DFD viewing-angle limitation in both 2D and 3D perception. The Deep DFD display can enlarge the DFD image depth by modulating the spatial frequencies of the front and rear images. Secondly, the arc 3D display can provide floating 3D images behind or in front of the display by illuminating many arc-shaped directional scattering sources, for example, arc-shaped scratches on a flat board. A curved arc 3D display, composed of many directional scattering sources on a curved surface, can provide a peculiar 3D image, for example, a floating image in a cylindrical bottle. A new active device has been proposed for switching arc 3D images by using the tips of dual-frequency liquid-crystal prisms as directional scattering sources. Directional scattering can be switched on/off by changing the liquid-crystal refractive index, resulting in switching of the arc 3D image.
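
    The DFD principle of trading perceived depth against the luminance ratio of two layered panels can be sketched as below; the strictly linear depth-to-luminance mapping is a simplifying assumption of this sketch, not a claim about the exact characteristics reported by the authors.

      # Depth-fused 3D (DFD) sketch: the same image is shown on a front and a rear
      # panel, and the perceived depth of each pixel moves between the panels roughly
      # in proportion to the luminance ratio (assumed linear here).
      import numpy as np

      def dfd_split(image, target_depth, front_depth, rear_depth):
          """Split a 2D luminance image into front/rear layers for a target depth map."""
          w = np.clip((target_depth - front_depth) / (rear_depth - front_depth), 0.0, 1.0)
          return (1.0 - w) * image, w * image        # front luminance, rear luminance

      image = np.ones((4, 4))                        # flat white test patch
      depth = np.linspace(0.0, 1.0, 16).reshape(4, 4)
      front, rear = dfd_split(image, depth, front_depth=0.0, rear_depth=1.0)
      assert np.allclose(front + rear, image)        # total luminance is preserved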

  17. Depth discrimination from occlusions in 3D clutter.

    PubMed

    Langer, Michael S; Zheng, Haomin; Rezvankhah, Shayan

    2016-09-01

    Objects such as trees, shrubs, and tall grass consist of thousands of small surfaces that are distributed over a three-dimensional (3D) volume. To perceive the depth of surfaces within 3D clutter, a visual system can use binocular stereo and motion parallax. However, such parallax cues are less reliable in 3D clutter because surfaces tend to be partly occluded. Occlusions provide depth information, but it is unknown whether visual systems use occlusion cues to aid depth perception in 3D clutter, as previous studies have addressed occlusions for simple scene geometries only. Here, we present a set of depth discrimination experiments that examine depth from occlusion cues in 3D clutter, and how these cues interact with stereo and motion parallax. We identify two probabilistic occlusion cues. The first is based on the fraction of an object that is visible. The second is based on the depth range of the occluders. We show that human observers use both of these occlusion cues. We also define ideal observers that are based on these occlusion cues. Human observer performance is close to ideal using the visibility cue but far from ideal using the range cue. A key reason for the latter is that the range cue depends on depth estimation of the clutter itself which is unreliable. Our results provide new fundamental constraints on the depth information that is available from occlusions in 3D clutter, and how the occlusion cues are combined with binocular stereo and motion parallax cues. PMID:27618514

  18. Fast Mode Decision for 3D-HEVC Depth Intracoding

    PubMed Central

    Li, Nana; Wu, Qinggang

    2014-01-01

    The emerging international standard of high efficiency video coding based 3D video coding (3D-HEVC) is a successor to multiview video coding (MVC). In 3D-HEVC depth intracoding, the depth modeling mode (DMM) and the high efficiency video coding (HEVC) intraprediction modes are both employed to select the best coding mode for each coding unit (CU). This technique achieves the highest possible coding efficiency, but it results in extremely large encoding time, which hinders the practical application of 3D-HEVC. In this paper, a fast mode decision algorithm based on the correlation between the texture video and the depth map is proposed to reduce the computational complexity of 3D-HEVC depth intracoding. Since the texture video and its associated depth map represent the same scene, there is a high correlation between the prediction modes of the texture video and the depth map. Therefore, we can skip some specific depth intraprediction modes rarely used in the related texture CU. Experimental results show that the proposed algorithm can significantly reduce the computational complexity of 3D-HEVC depth intracoding while maintaining coding efficiency. PMID:24963512
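
    The texture/depth correlation can be exploited by pruning the depth intra candidate list according to the co-located texture CU mode, roughly as sketched below; the specific mode groupings and the decision to search DMM only for directional texture modes are invented placeholders, not the paper's actual tables.

      # Schematic correlation-based fast mode decision for depth intra coding.
      from typing import List

      HEVC_INTRA_MODES = list(range(35))     # 0: planar, 1: DC, 2-34: angular
      DMM_MODES = ["DMM1", "DMM4"]           # depth modelling modes in 3D-HEVC

      def candidate_depth_modes(texture_mode: int) -> List:
          if texture_mode in (0, 1):
              # smooth texture: keep only non-directional modes, skip the costly DMM search
              return [0, 1]
          # directional texture: keep nearby angular modes plus the DMM wedgelet modes
          nearby = [m for m in range(texture_mode - 2, texture_mode + 3) if 2 <= m <= 34]
          return [0, 1] + nearby + DMM_MODES

      def best_depth_mode(depth_block, texture_mode, cost_fn):
          """Rate-distortion search restricted to the pruned candidate list."""
          candidates = candidate_depth_modes(texture_mode)
          return min(candidates, key=lambda mode: cost_fn(depth_block, mode))

      # Example: a smooth co-located texture CU reduces 37 candidates to 2.
      print(len(HEVC_INTRA_MODES + DMM_MODES), "->", len(candidate_depth_modes(1)))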

  19. Pre-stack depth migration for improved imaging under seafloor canyons: 2D case study of Browse Basin, Australia*

    NASA Astrophysics Data System (ADS)

    Debenham, Helen; Westlake, Shane

    2014-06-01

    In the Browse Basin, as in many areas of the world, complex seafloor topography can cause problems with seismic imaging. This is related to complex ray paths, and sharp lateral changes in velocity. This paper compares ways in which 2D Kirchhoff imaging can be improved below seafloor canyons, using both time and depth domain processing. In the time domain, to improve on standard pre-stack time migration (PSTM) we apply removable seafloor static time shifts in order to reduce the push down effect under seafloor canyons before migration. This allows for better event continuity in the seismic imaging. However this approach does not fully solve the problem, still giving sub-optimal imaging, leaving amplitude shadows and structural distortion. Only depth domain processing with a migration algorithm that honours the paths of the seismic energy as well as a detailed velocity model can provide improved imaging under these seafloor canyons, and give confidence in the structural components of the exploration targets in this area. We therefore performed depth velocity model building followed by pre-stack depth migration (PSDM), the result of which provided a step change improvement in the imaging, and provided new insights into the area.

  20. Preference for motion and depth in 3D film

    NASA Astrophysics Data System (ADS)

    Hartle, Brittney; Lugtigheid, Arthur; Kazimi, Ali; Allison, Robert S.; Wilcox, Laurie M.

    2015-03-01

    While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear these always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers' preferences for moving S3D film content in a theatre setting. Specifically we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (IA). The amount of IA had no impact on clip preferences regardless of the direction or speed of camera movement. However, preferences were influenced by camera speed, but only in the in-depth condition where viewers preferred faster motion. Given that previous research shows that slower speeds are more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Instead, it is clear that viewer response to S3D film is complex and that film parameters selected to enhance comfort may in some instances produce less appealing content.

  1. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required. Accurate pose estimates are needed to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape-index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795

  2. Depth propagation and surface construction in 3-D vision.

    PubMed

    Georgeson, Mark A; Yates, Tim A; Schofield, Andrew J

    2009-01-01

    In stereo vision, regions with ambiguous or unspecified disparity can acquire perceived depth from unambiguous regions. This has been called stereo capture, depth interpolation or surface completion. We studied some striking induced depth effects suggesting that depth interpolation and surface completion are distinct stages of visual processing. An inducing texture (2-D Gaussian noise) had sinusoidal modulation of disparity, creating a smooth horizontal corrugation. The central region of this surface was replaced by various test patterns whose perceived corrugation was measured. When the test image was horizontal 1-D noise, shown to one eye or to both eyes without disparity, it appeared corrugated in much the same way as the disparity-modulated (DM) flanking regions. But when the test image was 2-D noise, or vertical 1-D noise, little or no depth was induced. This suggests that horizontal orientation was a key factor. For a horizontal sine-wave luminance grating, strong depth was induced, but for a square-wave grating, depth was induced only when its edges were aligned with the peaks and troughs of the DM flanking surface. These and related results suggest that disparity (or local depth) propagates along horizontal 1-D features, and then a 3-D surface is constructed from the depth samples acquired. The shape of the constructed surface can be different from the inducer, and so surface construction appears to operate on the results of a more local depth propagation process. PMID:18977239

  3. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede the development of 3D technology. In this paper we propose some factors affecting human perception of depth as new quality metrics. These factors come from three aspects of 3D video: spatial characteristics, temporal characteristics and scene movement characteristics. They play important roles in the viewer's visual perception: if there are many objects moving with a certain velocity and the scene changes quickly, viewers will feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyse their effect on visual fatigue. The MSE (mean square error) of different blocks is computed within a frame and between frames for 3D stereoscopic videos. The depth frame is divided into a number of blocks, with overlapping and shared pixels (half a block) in the horizontal and vertical directions, so that edge information of objects in the image is not ignored. The distribution of these data is then characterized by kurtosis, with emphasis on regions at which the human eye is likely to gaze. Weight values are obtained from the normalized kurtosis. Applied to an individual depth frame, the method yields the spatial variation; applied between the current and previous frames, it yields the temporal variation and the scene movement variation. The three factors are combined linearly, giving an objective assessment value for 3D videos directly. The coefficients of the three factors are estimated by linear regression. Finally, the experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
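
    A sketch of the block-based weighting described above: half-overlapping blocks, a per-block mean-square difference (within a frame for spatial variation, between frames for temporal variation), and a weight derived from the kurtosis of the block values. The block size, the normalization and the final combination coefficients are placeholder assumptions.

      # Block-MSE / kurtosis weighting sketch for the depth-based fatigue metric.
      import numpy as np
      from scipy.stats import kurtosis

      def block_mse(a, b, block=16):
          """Mean squared difference of half-overlapping blocks between maps a and b."""
          step = block // 2
          rows = range(0, a.shape[0] - block + 1, step)
          cols = range(0, a.shape[1] - block + 1, step)
          return np.array([np.mean((a[r:r + block, c:c + block] -
                                    b[r:r + block, c:c + block]) ** 2)
                           for r in rows for c in cols])

      def factor_weight(block_values):
          """Weight from normalized kurtosis of the block-value distribution."""
          k = kurtosis(block_values, fisher=False)
          return k / (k + 1.0)                         # simple normalization to (0, 1)

      depth_prev = np.random.default_rng(1).random((144, 176))
      depth_cur = depth_prev + 0.05 * np.random.default_rng(2).random((144, 176))

      spatial = factor_weight(block_mse(depth_cur, np.full_like(depth_cur, depth_cur.mean())))
      temporal = factor_weight(block_mse(depth_cur, depth_prev))
      fatigue_score = 0.5 * spatial + 0.5 * temporal   # coefficients would come from regression
      print(round(fatigue_score, 3))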

  4. Seismic imaging of the Waltham Canyon fault, California: comparison of ray‐theoretical and Fresnel volume prestack depth migration

    USGS Publications Warehouse

    Bauer, Klaus; Ryberg, Trond; Fuis, Gary S.; Lüth, Stefan

    2013-01-01

    Near-vertical faults can be imaged using reflected refractions identified in controlled-source seismic data. Often these phases are observed on a few neighboring shot or receiver gathers, resulting in a low-fold data set. Imaging can be carried out with Kirchhoff prestack depth migration, in which migration noise is suppressed by constructive stacking of large amounts of multifold data. Fresnel volume migration can be used for low-fold data without severe migration noise, as the smearing along isochrones is limited to the first Fresnel zone around the reflection point. We developed a modified Fresnel volume migration technique to enhance imaging of steep faults and to suppress noise and undesired coherent phases. The modifications include target-oriented filters to separate reflected refractions from steep-dipping faults and reflections with hyperbolic moveout. Undesired phases like multiple reflections, mode conversions, direct P and S waves, and surface waves are suppressed by these filters. As an alternative approach, we developed a new prestack line-drawing migration method, which can be considered a proxy for an infinite-frequency approximation of the Fresnel volume migration. The line-drawing migration does not consider waveform information but requires significantly less computational time. Target-oriented filters were extended by dip filters in the line-drawing migration method. The migration methods were tested with synthetic data and applied to real data from the Waltham Canyon fault, California. The two techniques are applied best in combination, to design filters and to generate complementary images of steep faults.
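
    The coherence weighting used to confine Kirchhoff-type operators in such low-fold settings can be illustrated with a semblance measure over a small receiver group and time window; the window length, group size and normalization below are assumptions of the sketch.

      # Normalized coherence (semblance) weight along a predicted moveout: predicted
      # arrival times at a group of nearby receivers are compared with the data by
      # stacking inside a short time window.
      import numpy as np

      def coherence_weight(traces, dt, predicted_times, half_win=5):
          """Semblance of neighboring traces along the predicted moveout (0..1)."""
          ntr, nt = traces.shape
          win = np.arange(-half_win, half_win + 1)
          idx = np.clip(np.rint(predicted_times / dt).astype(int)[:, None] + win, 0, nt - 1)
          aligned = traces[np.arange(ntr)[:, None], idx]        # (ntr, window) samples
          num = np.sum(np.sum(aligned, axis=0) ** 2)
          den = ntr * np.sum(aligned ** 2) + 1e-12
          return num / den

      # Example: an event whose moveout matches the prediction gets a weight near 1,
      # a mismatched prediction gets a much smaller weight.
      dt, nt, ntr = 0.002, 400, 12
      t_true = 0.3 + 0.004 * np.arange(ntr)                      # linear moveout across the group
      traces = np.zeros((ntr, nt))
      traces[np.arange(ntr), np.rint(t_true / dt).astype(int)] = 1.0
      print(coherence_weight(traces, dt, t_true))                # close to 1
      print(coherence_weight(traces, dt, t_true[::-1]))          # mismatched, close to 0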

  5. 3D hand tracking using Kalman filter in depth space

    NASA Astrophysics Data System (ADS)

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

    Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and employing a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using a Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of the visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
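
    A constant-velocity Kalman filter over 3D positions from the depth sensor is the core of such a tracker and can be sketched as follows; the state model, noise covariances and frame rate are illustrative assumptions, and the hand-detection stage is not shown.

      # Minimal constant-velocity Kalman filter for tracking a 3D hand position from
      # noisy depth-sensor measurements. The state is [x, y, z, vx, vy, vz].
      import numpy as np

      class KalmanTracker3D:
          def __init__(self, dt=1 / 30.0, process_var=1e-2, meas_var=1e-3):
              self.x = np.zeros(6)                           # state: position + velocity
              self.P = np.eye(6)
              self.F = np.eye(6)
              self.F[:3, 3:] = dt * np.eye(3)                # constant-velocity transition
              self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
              self.Q = process_var * np.eye(6)
              self.R = meas_var * np.eye(3)

          def predict(self):
              self.x = self.F @ self.x
              self.P = self.F @ self.P @ self.F.T + self.Q
              return self.x[:3]

          def update(self, z):
              """z is a measured 3D hand position from the depth sensor."""
              S = self.H @ self.P @ self.H.T + self.R
              K = self.P @ self.H.T @ np.linalg.inv(S)
              self.x = self.x + K @ (z - self.H @ self.x)
              self.P = (np.eye(6) - K @ self.H) @ self.P
              return self.x[:3]

      # Usage: track a hand moving along x with noisy depth measurements.
      rng = np.random.default_rng(3)
      tracker = KalmanTracker3D()
      for k in range(30):
          true_pos = np.array([0.01 * k, 0.2, 0.8])          # metres
          measured = true_pos + 0.005 * rng.standard_normal(3)
          tracker.predict()
          estimate = tracker.update(measured)
      print(np.round(estimate, 3))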

  6. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in the system, and the problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between the current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required to signal the decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit saving, and it increased further when evaluated on synthesized views at virtual viewpoints.

  7. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework of depth mapping to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images. PMID:27410090

  8. Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera

    NASA Astrophysics Data System (ADS)

    Kim, H.; Yoon, W.; Kim, T.

    2016-06-01

    In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data using the ToF (time of flight) method and intensity data from the strength of the returned signal. The depth camera used in this paper was a SR4000 from MESA Imaging. This camera generates a depth map and intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with mm precision, while the generated intensity map contains texture data with considerable noise. We used the intensity maps for extracting tiepoints, and the depth maps for assigning z coordinates to the tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data at each rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps and converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single mosaicked depth map was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
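
    The 3D-3D transformation estimation in the second step can be done in closed form from matched 3D tiepoints using the standard Umeyama/Procrustes solution for a similarity transform, sketched below; whether the paper uses exactly this estimator is not stated, so treat it as an illustration.

      # Closed-form estimation of a 3D similarity transform (scale, rotation,
      # translation) from matched 3D tiepoints.
      import numpy as np

      def similarity_transform(src, dst):
          """Return s, R, t such that dst ~= s * R @ src + t for Nx3 point arrays."""
          mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
          src_c, dst_c = src - mu_s, dst - mu_d
          cov = dst_c.T @ src_c / src.shape[0]
          U, D, Vt = np.linalg.svd(cov)
          S = np.eye(3)
          if np.linalg.det(U) * np.linalg.det(Vt) < 0:       # avoid a reflection
              S[2, 2] = -1.0
          R = U @ S @ Vt
          s = np.trace(np.diag(D) @ S) / np.mean(np.sum(src_c ** 2, axis=1))
          t = mu_d - s * R @ mu_s
          return s, R, t

      # Check on synthetic tiepoints derived from depth-map samples.
      rng = np.random.default_rng(4)
      src = rng.random((20, 3))
      angle = np.radians(20.0)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                         [np.sin(angle),  np.cos(angle), 0],
                         [0, 0, 1]])
      dst = 1.2 * src @ R_true.T + np.array([0.5, -0.1, 2.0])
      s, R, t = similarity_transform(src, dst)
      print(round(s, 3), np.round(t, 3))                      # ~1.2 and the true translation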

  9. Depth-controlled 3D TV image coding

    NASA Astrophysics Data System (ADS)

    Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo

    1998-04-01

    Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include the extraction of the disparity field associated with the stereo-pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo-pairs to reduce the redundancies in both channels according to their visual sensitivity. Through an a priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background of the reproduced volume space in order to code them selectively based on their visual relevance. Such a region of interest is here identified as the one focused by the shooting device. By suitably scaling the DCT coefficients in such a way that precision is reduced for image blocks lying in less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns, while retaining finer details in the more relevant image portions. From an implementation point of view, it is worth noting that the proposed system confines its extra processing to the encoder side only. Simulation results show improvements such as better image quality for a given transmission bit rate, or a graceful quality degradation of the reconstructed images with decreasing data rates.

  10. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the 3D cameras' depth capture capabilities.

  11. Priority depth fusion for the 2D to 3D conversion system

    NASA Astrophysics Data System (ADS)

    Chang, Yu-Lin; Chen, Wei-Yin; Chang, Jing-Ying; Tsai, Yi-Min; Lee, Chia-Lin; Chen, Liang-Gee

    2008-02-01

    To provide 3D content for upcoming 3D display devices, a real-time automatic depth-fusion 2D-to-3D conversion system is needed on the home multimedia platform. We propose a priority depth fusion algorithm and a 2D-to-3D conversion system which generates the depth map from most commercial video sequences. The results from different kinds of depth reconstruction methods are integrated into one depth map by the proposed priority depth fusion algorithm. Then the depth map and the original 2D image are converted to stereo images for display on 3D display devices. In this paper, a 2D-to-3D conversion algorithm set is combined with the proposed depth fusion algorithm to show the improved results. With the converted 3D content, demand for 3D display devices will also increase. As the two technologies evolve together, the 3D-TV era will arrive sooner.

  12. Subpixel Resolution In Depth Perceived Via 3-D Television

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Von Sydow, Marika; Fender, Derek H.

    1993-01-01

    Report describes experiment in which two black vertical bars on featureless white background placed near intersection of optical axes of two charge-coupled-device video cameras positioned to give stereoscopic views. Trained human observers found to perceive depths at subpixel resolutions in stereoscopic television images. This finding significant for remote stereoscopic monitoring, especially during precise maneuvers of remotely controlled manipulators. Also significant for research in processing of visual information by human brain.

  13. Combining depth and color data for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Joergensen, Thomas M.; Linneberg, Christian; Andersen, Allan W.

    1997-09-01

    This paper describes the shape recognition system that has been developed within the ESPRIT project 9052 ADAS on automatic disassembly of TV-sets using a robot cell. Depth data from a chirped laser radar are fused with color data from a video camera. The sensor data is pre-processed in several ways and the obtained representation is used to train a RAM neural network (memory based reasoning approach) to detect different components within TV-sets. The shape recognizing architecture has been implemented and tested in a demonstration setup.

  14. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview condition simulator which can project up to four different view images to each eye is introduced. Experiments with this simulator, using images having both disparity and perspective, indicate that the depth of field (DOF) is extended beyond the default DOF values as the number of simultaneously but separately projected view images per eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments of the image with both disparity and perspective are not prominent compared with those of the image with disparity only.

  15. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    NASA Astrophysics Data System (ADS)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.

  16. Depth enhancement of S3D content and the psychological effects

    NASA Astrophysics Data System (ADS)

    Hirahara, Masahiro; Shiraishi, Saki; Kawai, Takashi

    2012-03-01

    Stereoscopic 3D (S3D) imaging technologies have recently been widely used to create content for movies, TV programs, games, etc. Although S3D content differs from 2D content by the use of binocular parallax to induce depth sensation, the relationship between depth control and the user experience remains unclear. In this study, the user experience was subjectively and objectively evaluated in order to determine the effectiveness of depth control, such as an expansion or reduction, or a forward or backward shift, of the range of maximum parallactic angles in the crossed and uncrossed directions (the depth bracket). Four types of S3D content were used in the subjective and objective evaluations. The depth brackets of the comparison stimuli were modified in order to enhance the depth sensation corresponding to the content. Interpretation Based Quality (IBQ) methodology was used for the subjective evaluation, and the heart rate was measured to evaluate the physiological effect. The results of the evaluations suggest the following two points. (1) Expansion/reduction of the depth bracket affects preference and enhances positive emotions toward the S3D content. (2) Expansion/reduction of the depth bracket produces the above-mentioned effects more notably than shifting in the crossed/uncrossed directions.

  17. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed optical shutter device enables capture of a full-HD depth image with depth accuracy on the mm scale, which is the largest depth-image resolution among the state of the art, previously limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full-HD depth images simultaneously. The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
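
    The TOF relation behind a 20 MHz-modulated shutter makes a small worked example: a measured phase delay maps to distance as d = c*dphi/(4*pi*f), giving an unambiguous range of c/(2f), about 7.5 m at 20 MHz. The four-phase demodulation in the sketch is a common textbook scheme, not necessarily the one used in the described camera.

      # Phase-to-depth relation for a continuous-wave ToF system modulated at 20 MHz.
      import numpy as np

      C = 299_792_458.0          # speed of light, m/s
      F_MOD = 20e6               # modulation frequency, Hz

      def depth_from_phase(dphi):
          return C * dphi / (4.0 * np.pi * F_MOD)

      def phase_from_samples(a0, a1, a2, a3):
          """Phase from four samples taken at 0, 90, 180, 270 degrees of the modulation."""
          return np.mod(np.arctan2(a3 - a1, a0 - a2), 2.0 * np.pi)

      print("unambiguous range:", C / (2.0 * F_MOD), "m")                 # about 7.5 m
      print("depth at 90 deg phase:", depth_from_phase(np.pi / 2), "m")   # about 1.87 m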

  18. Imaging the SE1 reflector near the Continental Deep Drilling Site (KTB, Germany) with coherence-based prestack-depth migration

    NASA Astrophysics Data System (ADS)

    Hellwig, O.; Hlousek, F.; Buske, S.

    2013-12-01

    Kirchhoff prestack depth migration algorithms are widely used to image geological structures. There are a variety of Kirchhoff-type methods, such as Fresnel-Volume-Migration (FVM), that try to overcome the incapability of standard Kirchhoff migration to image steeply dipping reflectors or to produce clear and artifact-free seismic images if only a small number of seismic traces is available. All of these modified Kirchhoff migration algorithms employ additional weighting factors to confine the migration operator and to limit the seismic image to the actual position along the two-way travel time isochrone where diffractions and reflections originate. Coherence-based prestack-depth migration (CBM) uses a weighting factor obtained directly from the input data by evaluating a normalized coherence measure defined over neighboring traces and a time window around the particular time sample to be imaged. This coherence measure and the corresponding weighting factor are high if the differences in the arrival times of a coherent event at nearby receivers can be explained by the differences in the travel times along the ray paths from the source position to a certain image point on the two-way travel time isochrone, and from there to the receiver group. In turn, a small weighting factor is obtained if the travel time differences cannot be explained by a certain combination of source, image point and the selected receiver group. Thereby it is possible to suppress random noise and to obtain artifact-free seismic images even with a small number of seismic traces. This method is applied to a single shot from the Instruct-93 data recorded at the Continental Deep Drilling Site (KTB) near Windischeschenbach (Germany). This seismic experiment was designed to illuminate the steeply dipping SE1-reflector, that was known from earlier seismic investigations, at a target depth of about 8 to 9 km. For this purpose the shot point and the 120 receivers were placed approximately 10 km away

  19. Blind Depth-variant Deconvolution of 3D Data in Wide-field Fluorescence Microscopy.

    PubMed

    Kim, Boyoung; Naemura, Takeshi

    2015-01-01

    This paper proposes a new deconvolution method for 3D wide-field fluorescence microscopy. Most previous methods are insufficient for restoring a 3D cell structure, since the point spread function (PSF) is simply assumed to be depth-invariant, whereas the PSF of a microscope changes significantly along the optical axis. A few methods that consider a depth-variant PSF have been proposed; however, they are impractical, since they are non-blind approaches that use a PSF measured beforehand, whereas the imaging condition of the target image differs from that of the pre-measurement. To solve these problems, this paper proposes a blind approach that estimates a depth-variant, specimen-dependent PSF and restores the 3D cell structure. Experiments show that the proposed method outperforms previous ones in terms of suppressing axial blur. The proposed method is composed of the following three steps: First, a non-parametric averaged PSF is estimated by the Richardson-Lucy algorithm, whose initial parameter is given by the central depth predicted from an intensity analysis. Second, the estimated PSF is fitted to Gibson's parametric PSF model via optimization, and depth-variant PSFs are generated. Third, the 3D cell structure is restored using a depth-variant version of a generalized expectation-maximization algorithm. PMID:25950821
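
    The first step above rests on the classical Richardson-Lucy iteration. A minimal sketch of that update is shown below (Python/NumPy); it is the generic deconvolution form applied to an image given a PSF, not the paper's blind, depth-variant pipeline, and the iteration count and initialization are assumptions.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, n_iter=25, eps=1e-12):
          """Plain Richardson-Lucy deconvolution for non-negative data.
          observed and psf are NumPy arrays of the same dimensionality (2D or 3D)."""
          estimate = np.full(observed.shape, observed.mean(), dtype=float)
          psf_mirror = psf[::-1, ::-1, ::-1] if psf.ndim == 3 else psf[::-1, ::-1]
          for _ in range(n_iter):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = observed / (blurred + eps)            # data / model
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate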

  20. Efficient and high speed depth-based 2D to 3D video conversion

    NASA Astrophysics Data System (ADS)

    Somaiya, Amisha Himanshu; Kulkarni, Ramesh K.

    2013-09-01

    Stereoscopic video is the new era in video viewing and has wide applications such as medicine, satellite imaging and 3D television. Such stereo content can be generated directly using S3D cameras. However, this approach requires an expensive setup, so converting monoscopic content to S3D becomes a viable approach. This paper proposes a depth-based algorithm for monoscopic-to-stereoscopic video conversion that uses the y-axis coordinates of the bottom-most pixels of foreground objects. The code can be used for arbitrary videos without prior database training. It does not face the limitations of single monocular depth cues, nor does it combine depth cues, thus consuming less processing time without affecting the efficiency of the 3D video output. The algorithm, though not real-time, is faster than other available 2D-to-3D video conversion techniques by an average ratio of 1:8 to 1:20, essentially qualifying as high-speed. It is an automatic conversion scheme, hence it directly produces the 3D video output without human intervention and, with the above-mentioned features, becomes an ideal choice for efficient monoscopic-to-stereoscopic video conversion.
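
    A toy version of the core depth-assignment rule is sketched below (Python, with hypothetical inputs: an object label image from any prior segmentation step). Objects whose bottom-most pixel sits lower in the frame are treated as closer to the camera, and the background gets a simple vertical ramp; depth convention, ramp direction and the disparity rule are illustrative assumptions.

      import numpy as np

      def depth_from_bottom_pixels(labels, n_objects):
          """Per-pixel depth in [0, 1] (1 = nearest). labels is an HxW integer image
          where 0 is background and 1..n_objects mark segmented foreground objects."""
          h, w = labels.shape
          depth = np.repeat(np.linspace(0.0, 1.0, h)[:, None], w, axis=1)  # background ramp
          for obj in range(1, n_objects + 1):
              ys, xs = np.nonzero(labels == obj)
              if ys.size == 0:
                  continue
              y_bottom = ys.max()                        # row of the bottom-most pixel
              depth[ys, xs] = y_bottom / float(h - 1)    # one depth per object
          return depth

      # a second (right-eye) view could then be synthesized by shifting each pixel
      # horizontally by a disparity proportional to its depth value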

  1. Rate-constrained 3D surface estimation from noise-corrupted multiview depth videos.

    PubMed

    Sun, Wenxiu; Cheung, Gene; Chou, Philip A; Florencio, Dinei; Zhang, Cha; Au, Oscar C

    2014-07-01

    Transmitting compactly represented geometry of a dynamic 3D scene from a sender can enable a multitude of imaging functionalities at a receiver, such as synthesis of virtual images at freely chosen viewpoints via depth-image-based rendering. While depth maps (projections of 3D geometry onto 2D image planes at chosen camera viewpoints) can nowadays be readily captured by inexpensive depth sensors, they are often corrupted by non-negligible acquisition noise. Given that depth maps need to be denoised and compressed at the encoder for efficient network transmission to the decoder, in this paper we consider the denoising and compression problems jointly, arguing that doing so results in better overall performance than the alternative of solving the two problems separately in two stages. Specifically, we formulate a rate-constrained estimation problem, where given a set of observed noise-corrupted depth maps, the most probable (maximum a posteriori (MAP)) 3D surface is sought within a search space of surfaces with representation size no larger than a prespecified rate constraint. Our rate-constrained MAP solution reduces to the conventional unconstrained MAP 3D surface reconstruction solution if the rate constraint is loose. To solve our posed rate-constrained estimation problem, we propose an iterative algorithm, where in each iteration the structure (object boundaries) and the texture (surfaces within the object boundaries) of the depth maps are optimized alternately. Using the MVC codec for compression of multiview depth video and MPEG free viewpoint video sequences as input, experimental results show that rate-constrained estimated 3D surfaces computed by our algorithm can reduce the coding rate of depth maps by up to 32% compared with unconstrained estimated surfaces for the same quality of synthesized virtual views at the decoder. PMID:24876124

  2. Automatic 3-D gravity modeling of sedimentary basins with density contrast varying parabolically with depth

    NASA Astrophysics Data System (ADS)

    Chakravarthi, V.; Sundararajan, N.

    2004-07-01

    A method to model 3-D sedimentary basins with density contrast varying with depth is presented along with the code GRAV3DMOD. The measured gravity fields, reduced to a horizontal plane, are assumed to be available at the grid nodes of a rectangular/square mesh. Juxtaposed 3-D rectangular/square blocks, with the geometrical centers of their tops coinciding with the grid nodes of the mesh, approximate a sedimentary basin. An algorithm based on Newton's forward difference formula automatically calculates initial depth estimates of the sedimentary basin, assuming that 2-D infinite horizontal slabs, within which the density contrast varies with depth, could generate the measured gravity fields. Forward modeling is realized through the available code GR3DPRM, which computes the theoretical gravity field of a 3-D block. The lower boundary of the sedimentary basin is formulated by estimating the depth values of the 3-D blocks within predetermined limits. The algorithm is efficient in the sense that it automatically generates the grid files of the interpreted results, which can be viewed in the form of the respective contour maps. Measured gravity fields pertaining to the Chintalpudi sub-basin, India and the Los Angeles basin, California, USA, in which the density contrast varies with depth, are interpreted to show the applicability of the method.
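
    The slab-based initial depth estimate can be illustrated with a short numerical sketch (Python). The parabolic density-contrast law and its constants below are assumptions for illustration, not the values used by GRAV3DMOD, and the slab relation g = 2*pi*G * (integral of the density contrast over the slab thickness) is inverted numerically by root bracketing rather than with Newton's forward difference formula.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import brentq

      G = 6.674e-11  # gravitational constant (SI)

      def delta_rho(z, rho0=-400.0, alpha=0.1):
          """Assumed parabolic density-contrast law (kg/m^3) decaying with depth z (m)."""
          return rho0 ** 3 / (rho0 - alpha * z) ** 2

      def slab_gravity(h):
          """Gravity effect (m/s^2) of an infinite slab of thickness h with the
          depth-varying contrast above: g(h) = 2*pi*G * integral_0^h delta_rho dz."""
          integral, _ = quad(delta_rho, 0.0, h)
          return 2.0 * np.pi * G * integral

      def initial_depth(g_obs, h_max=20e3):
          """Initial basin depth at one grid node: invert the slab relation for the
          observed (negative) anomaly g_obs in m/s^2."""
          return brentq(lambda h: slab_gravity(h) - g_obs, 0.0, h_max)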

  3. Integration of 3D Structure from Disparity into Biological Motion Perception Independent of Depth Awareness

    PubMed Central

    Wang, Ying; Jiang, Yi

    2014-01-01

    Images projected onto the retinas of our two eyes come from slightly different directions in the real world, constituting binocular disparity that serves as an important source for depth perception - the ability to see the world in three dimensions. It remains unclear whether the integration of disparity cues into visual perception depends on the conscious representation of stereoscopic depth. Here we report evidence that, even without inducing discernible perceptual representations, the disparity-defined depth information could still modulate the visual processing of 3D objects in depth-irrelevant aspects. Specifically, observers who could not discriminate disparity-defined in-depth facing orientations of biological motions (i.e., approaching vs. receding) due to an excessive perceptual bias nevertheless exhibited a robust perceptual asymmetry in response to the indistinguishable facing orientations, similar to those who could consciously discriminate such 3D information. These results clearly demonstrate that the visual processing of biological motion engages the disparity cues independent of observers’ depth awareness. The extraction and utilization of binocular depth signals thus can be dissociable from the conscious representation of 3D structure in high-level visual perception. PMID:24586622

  4. Crustal Thickness and Moho Character of the Fast-Spreading East Pacific Rise Between 9º37.5'N and 9º57'N From Poststack and Prestack Time Migrated 3D MCS data

    NASA Astrophysics Data System (ADS)

    Nedimovic, M. R.; Aghaei, O.; Carbotte, S. M.; Carton, H. D.; Canales, J. P.

    2014-12-01

    We measured crustal thickness and mapped Moho transition zone (MTZ) character over an 880 km2 section of the fast-spreading East Pacific Rise (EPR) using the first full 3D multichannel seismic (MCS) dataset collected across a mid-ocean ridge (MOR). The 9°42'-9°57'N area was initially investigated using 3D poststack time migration, which was followed by application of 3D prestack time migration (PSTM) to the whole dataset. This first attempt at applying 3D PSTM to MCS data from a MOR environment resulted in the most detailed reflection images of a spreading center to date. MTZ reflections are for the first time imaged below the ridge axis away from axial discontinuities indicating that Moho is formed at zero age at least at some sections of the MOR system. The average crustal thickness and crustal velocity derived from PSTM are 5920±320 m and 6320±290 m/s, respectively. The average crustal thickness varies little from Pacific to Cocos plate suggesting mostly uniform crustal production in the last ~180 Ka. However, the crust thins by ~400 m from south to north. The MTZ reflections were imaged within ~92% of the study area, with ~66% of the total characterized by impulsive reflections interpreted to originate from a thin MTZ and 26% characterized by diffusive reflections interpreted to originate from a thick MTZ. The MTZ is dominantly diffusive at the southern (9°37.5'-9°40'N) and northern (9°51'-9°57'N) ends of the study area, and it is impulsive in the central region (9°42'-9°51'N). No data were collected between 9°40'N and 9°42'N. More efficient mantle melt extraction is inferred within the central region with greater proportion of the lower crust accreted from the axial magma lens than within the northern and southern sections. This along-axis variation in the crustal accretion style may be caused by interaction between the melt sources for the ridge and the local seamounts, which are present within the northern and southern survey sections. Third

  5. Integration of multiple view plus depth data for free viewpoint 3D display

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Yoshida, Yuko; Kawamoto, Tetsuya; Fujii, Toshiaki; Mase, Kenji

    2014-03-01

    This paper proposes a method for constructing a reasonably scaled end-to-end free-viewpoint video system that captures multi-view plus depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, multi-view plus depth data at four viewpoints are captured by the Kinect sensors simultaneously. The captured data are then integrated into point cloud data using the camera parameters. The obtained point cloud data are sampled into volume data consisting of voxels. Since the volume data generated from point clouds are sparse, they are densified using a global optimization algorithm. The final step is to reconstruct surfaces on the dense volume data with the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving depth maps is also presented.
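
    The depth-to-points, voxelization and marching-cubes steps of such a pipeline can be sketched as follows (Python; the intrinsics, voxel size and grid dimensions are placeholders, and the densification of the sparse volume is omitted).

      import numpy as np
      from skimage import measure

      def depth_to_points(depth, fx, fy, cx, cy):
          """Back-project a metric depth image to 3D points with pinhole intrinsics."""
          v, u = np.indices(depth.shape)
          z = depth
          x = (u - cx) * z / fx
          y = (v - cy) * z / fy
          pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
          return pts[pts[:, 2] > 0]                       # drop invalid (zero) depths

      def voxelize(points, voxel=0.02, dims=(128, 128, 128)):
          """Accumulate points into a binary voxel volume anchored at the minimum corner."""
          idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
          keep = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
          volume = np.zeros(dims, dtype=np.uint8)
          volume[tuple(idx[keep].T)] = 1
          return volume

      # after merging and densifying the volumes from all four sensors, extract the surface:
      # verts, faces, _, _ = measure.marching_cubes(volume.astype(float), level=0.5)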

  6. A new method to enlarge a range of continuously perceived depth in DFD (depth-fused 3D) display

    NASA Astrophysics Data System (ADS)

    Tsunakawa, Atsuhiro; Soumiya, Tomoki; Horikawa, Yuta; Yamamoto, Hirotsugu; Suyama, Shiro

    2013-03-01

    We address the problem in DFD displays that the maximum depth difference between the front and rear planes is limited, because beyond it the fusion of the front and rear images into one 3-D image becomes impossible. The range of continuously perceived depth was estimated as the depth difference between the front and rear planes increased. When the distance was large enough, the perceived depth was near the front plane at 0-40% of rear luminance and near the rear plane at 60-100% of rear luminance. This maximum depth range can be successfully enlarged by spatial-frequency modulation of the front and rear images. The change in perceived depth was evaluated when the high-frequency components of the front and rear images were cut off using a Fourier transform, at front-rear plane distances of 5 and 10 cm (4.9 and 9.4 minutes of arc). When the high-frequency components were not cut off enough at a distance of 5 cm, the perceived depth separated towards the front plane and the rear plane. However, when the images were blurred enough by cutting the high-frequency components, the perceived depth had a linear dependence on the luminance ratio. When the images were not blurred at a distance of 10 cm, the perceived depth separated towards the front plane at 0-30% of rear luminance, the rear plane at 80-100%, and the midpoint at 40-70%. However, when the images were blurred enough, the perceived depth again had a linear dependence on the luminance ratio.

  7. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because aberration severely affects the display performance of auto-stereoscopic 3D displays, diffraction theory is used to analyze the diffraction field distribution and the display depth through an aberration analysis. Based on the proposed method, the display depth of centrally and marginally reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify these conclusions.

  8. Non-obstructing 3D depth cues influence reach-to-grasp kinematics.

    PubMed

    Worssam, Christopher J; Meade, Lewis C; Connolly, Jason D

    2015-02-01

    It has been demonstrated that both visual feedback and the presence of certain types of non-target objects in the workspace can affect kinematic measures and the trajectory path of the moving hand during reach-to-grasp movements. Yet no study to date has examined the possible effect of providing non-obstructing three-dimensional (3D) depth cues within the workspace and with consistent retinal inputs and whether or not these alter manual prehension movements. Participants performed a series of reach-to-grasp movements in both open- (without visual feedback) and closed-loop (with visual feedback) conditions in the presence of one of three possible 3D depth cues. Here, it is reported that preventing online visual feedback (or not) and the presence of a particular depth cue had a profound effect on kinematic measures for both the reaching and grasping components of manual prehension-despite the fact that the 3D depth cues did not act as a physical obstruction at any point. The depth cues modulated the trajectory of the reaching hand when the target block was located on the left side of the workspace but not on the right. These results are discussed in relation to previous reports and implications for brain-computer interface decoding algorithms are provided. PMID:25311388

  9. 3D Radiative Aspects of the Increased Aerosol Optical Depth Near Clouds

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Wen, Guoyong; Remer, Lorraine; Cahalan, Robert; Coakley, Jim

    2007-01-01

    To characterize aerosol-cloud interactions it is important to correctly retrieve aerosol optical depth in the vicinity of clouds. It is well reported in the literature that aerosol optical depth increases with cloud cover. Part of the increase comes from real physics such as humidification; another part, however, comes from 3D cloud effects in the remote sensing retrievals. In many cases it is hard to say whether the retrieved increase in aerosol optical depth is a remote sensing artifact or real. In the presentation, we will discuss how the 3D cloud effects can be mitigated. We will demonstrate a simple model that can assess the enhanced illumination of cloud-free columns in the vicinity of clouds. This model is based on the assumption that the enhancement in the cloud-free column radiance comes from enhanced Rayleigh scattering due to the presence of surrounding clouds. A stochastic cloud model of broken cloudiness is used to simulate the upward flux.

  10. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    PubMed

    Carlini, Lina; Holden, Seamus J; Douglass, Kyle M; Manley, Suliana

    2015-01-01

    Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467

  12. Clinically Normal Stereopsis Does Not Ensure a Performance Benefit from Stereoscopic 3D Depth Cues

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Harrington, Lawrence K.; Wright, Steve T.; Watamaniuk, Scott N. J.; Heft, Eric L.

    2014-09-01

    To investigate the effect of manipulating disparity on task performance and viewing comfort, twelve participants were tested on a virtual object precision placement task while viewing a stereoscopic 3D (S3D) display. All participants had normal or corrected-to-normal visual acuity, passed the Titmus stereovision clinical test, and demonstrated normal binocular function, including phorias and binocular fusion ranges. Each participant completed six experimental sessions with different maximum binocular disparity limits. The results for ten of the twelve participants were generally as expected, demonstrating a large performance advantage when S3D cues were provided. The sessions with the larger disparity limits typically resulted in the best performance, and the sessions with no S3D cues the poorest performance. However, one participant demonstrated poorer performance in sessions with smaller disparity limits but improved performance in sessions with the larger disparity limits. Another participant's performance declined whenever any S3D cues were provided. Follow-up testing suggested that the phenomenon of pseudo-stereoanomaly may account for one viewer's atypical performance, while the phenomenon of stereoanomaly might account for the other. Overall, the results demonstrate that a subset of viewers with clinically normal binocular and stereoscopic vision may have difficulty performing depth-related tasks on S3D displays. The possibility of the vergence-accommodation conflict contributing to individual performance differences is also discussed.

  13. Depth-based coding of MVD data for 3D video extension of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Rusanovskyy, Dmytro; Hannuksela, Miska M.; Su, Wenyi

    2013-06-01

    This paper describes a novel approach that uses depth information for advanced coding of the associated video data in Multiview Video plus Depth (MVD)-based 3D video systems. As a possible implementation of this concept, we describe two coding tools developed for an H.264/AVC-based 3D video codec in response to the Moving Picture Experts Group (MPEG) Call for Proposals (CfP). These tools are Depth-based Motion Vector Prediction (DMVP) and Backward View Synthesis Prediction (BVSP). Simulation results obtained under the JCT-3V/MPEG 3DV Common Test Conditions show that the tools proposed in this paper reduce the bit rate of the coded video data by 15% on average (delta bit rate), which results in 13% total bit rate savings for the MVD data over state-of-the-art MVC+D coding. Moreover, the concept of depth-based video coding presented in this paper has been further developed by MPEG 3DV and JCT-3V, resulting in even higher compression efficiency of about 20% total delta bit rate reduction for the coded MVD data over the reference MVC+D coding. Considering these significant gains, the proposed coding approach can be beneficial for the development of new 3D video coding standards.

  14. Estimation of foot pressure from human footprint depths using 3D scanner

    NASA Astrophysics Data System (ADS)

    Wibowo, Dwi Basuki; Haryadi, Gunawan Dwi; Priambodo, Agus

    2016-03-01

    The analysis of normal and pathological variation in human foot morphology is central to several biomedical disciplines, including orthopedics, orthotic design, sports sciences, and physical anthropology, and it is also important for efficient footwear design. A classic and frequently used approach to studying foot morphology is analysis of footprint shape and footprint depth. Footprints are relatively easy to produce and measure, and they can be preserved naturally in different soils. In this study, footprint depth is correlated with the corresponding foot pressure of an individual using a 3D scanner. Several approaches are used for modeling and estimating footprint depths and foot pressures. The deepest footprint point is calculated as the maximum z coordinate minus the minimum z coordinate, and the average foot pressure is calculated as the ground reaction force (GRF) divided by the foot contact area, taken to correspond to the average footprint depth. Footprint depth was evaluated by importing the 3D scanner file (dxf) into AutoCAD; the z coordinates were then sorted from highest to lowest in Microsoft Excel to render the footprint depth in different colors. This is only a qualitative study because no foot pressure device was used as a comparator; the resulting maximum pressure is 3.02 N/cm2 on the calcaneus, 3.66 N/cm2 on the lateral arch, and 3.68 N/cm2 on the metatarsals and hallux.
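
    The depth and mean-pressure quantities described above can be computed directly from the scanned z coordinates, as in the short sketch below (Python); the contact threshold and the use of body weight as the GRF are assumptions for illustration.

      import numpy as np

      def footprint_metrics(z_cm, body_weight_n, cell_area_cm2, contact_thresh_cm=0.05):
          """Deepest point, mean footprint depth and a crude mean pressure from a grid of
          scanned z coordinates (cm, one value per grid cell of area cell_area_cm2)."""
          depth = z_cm.max() - z_cm                     # z_max - z per cell
          deepest = depth.max()                         # the z_max - z_min of the abstract
          contact = depth > contact_thresh_cm           # cells actually indented
          contact_area = contact.sum() * cell_area_cm2
          mean_pressure = body_weight_n / contact_area  # GRF / contact area, in N/cm^2
          return deepest, depth[contact].mean(), mean_pressure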

  15. Joint pre-stack depth migration and travel-time tomography applied to a deep seismic profile across the northern Barents Sea igneous province

    NASA Astrophysics Data System (ADS)

    Minakov, Alexander; Faleide, Jan Inge; Sakulina, Tamara; Krupnova, Natalia; Dergunov, Nikolai

    2015-04-01

    The mainly Permo-Triassic North Barents Sea Basin is considered a superdeep intracratonic basin containing over 20 km of sedimentary material. This basin was strongly affected by magmatism attributed to the formation of the Early Cretaceous High Arctic Large Igneous Province. Dolerite dikes, sills, and lava flows are observed in the northern Barents Sea and on the islands of Svalbard and Franz Josef Land. Some dike swarms can be traced over hundreds of kilometers using high-resolution airborne magnetic data. In the North Barents Sea Basin, the dikes fed a giant sill complex emplaced into organic-rich Triassic siliciclastic rocks. The sill complex creates a major challenge for seismic imaging by masking the underlying strata. In this contribution, we first perform refraction and reflection travel-time tomography using wide-angle ocean-bottom seismometer data (with receivers deployed every 10 km) along the 4-AR profile (Sakulina et al. 2007, Ivanova et al. 2011). The resulting tomographic model is then used to construct a background velocity model for the pre-stack depth migration. We show that the use of a combined velocity model for the time and depth imaging, based on travel-time tomography and RMS velocities, constitutes a substantial improvement with respect to a standard processing workflow, providing a more coherent seismic structure of this volcanic province. The interpretation of multichannel seismic and high-resolution magnetic data, together with P-wave velocity and density anomalies, allows us to create a model of the system of magmatic feeders in the crystalline basement of the northern Barents Sea region. Sakulina, T.S., Verba, M.L., Ivanova, N.M., Krupnova, N.A., Belyaev I.V., 2007. Deep structure of the north Barents-Kara Region along 4AR transect (Taimyr Peninsula - Franz Joseph Land). In: Models of the Earth's crust and upper mantle after deep seismic profiling. Proceedings of the international scientific-practical seminar. Rosnedra, VSEGEI. St

  16. A Novel 2D-to-3D Video Conversion Method Using Time-Coherent Depth Maps

    PubMed Central

    Yin, Shouyi; Dong, Hao; Jiang, Guangli; Liu, Leibo; Wei, Shaojun

    2015-01-01

    In this paper, we propose a novel 2D-to-3D video conversion method for 3D entertainment applications. 3D entertainment is getting more and more popular and can be found in many contexts, such as TV and home gaming equipment. 3D image sensors are a new method to produce stereoscopic video content conveniently and at a low cost, and can thus meet the urgent demand for 3D videos in the 3D entertainment market. Generally, a 2D image sensor and a 2D-to-3D conversion chip can compose a 3D image sensor. Our study presents a novel 2D-to-3D video conversion algorithm which can be adopted in a 3D image sensor. In our algorithm, a depth map is generated by combining a global depth gradient and a local depth refinement for each frame of the 2D video input. The global depth gradient is computed according to the image type, while the local depth refinement is related to color information. As input 2D video content consists of a number of video shots, the proposed algorithm reuses the global depth gradient of frames within the same video shot to generate time-coherent depth maps. The experimental results prove that this novel method can adapt to different image types, reduce computational complexity and improve the temporal smoothness of the generated 3D video. PMID:26131674
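
    A toy combination of a per-shot global depth gradient with a per-frame, colour-based local refinement might look like the sketch below (Python); the blending weight, the ramp direction and the colour rule are illustrative assumptions rather than the published algorithm.

      import numpy as np

      def global_gradient(h, w):
          """Global depth gradient reused for every frame of a shot (vertical ramp:
          top of the frame far, bottom near; values in [0, 1], 1 = nearest)."""
          return np.repeat(np.linspace(0.0, 1.0, h)[:, None], w, axis=1)

      def depth_map(frame_rgb, shot_gradient, w=0.7):
          """Blend the shot-level gradient with a crude colour-based local refinement."""
          r = frame_rgb[..., 0].astype(np.float32) / 255.0
          b = frame_rgb[..., 2].astype(np.float32) / 255.0
          local = 0.5 + 0.5 * (r - b)          # warmer regions nudged towards the viewer
          return np.clip(w * shot_gradient + (1.0 - w) * local, 0.0, 1.0)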

  18. ROI-preserving 3D video compression method utilizing depth information

    NASA Astrophysics Data System (ADS)

    Ti, Chunli; Xu, Guodong; Guan, Yudong; Teng, Yidan

    2015-09-01

    Efficiently transmitting the extra information of three-dimensional (3D) video is becoming a key issue in the development of 3DTV. The 2D-plus-depth format not only occupies less bandwidth and is compatible with transmission over existing channels, but also provides, to some extent, technical support for advanced 3D video compression. This paper proposes an ROI-preserving compression scheme to further improve the visual quality at a limited bit rate. Based on the connection between the focus of the human visual system (HVS) and depth information, the region of interest (ROI) can be selected automatically via depth map processing. The main improvement over common methods is that a mean-shift based segmentation is applied to the depth map before foreground ROI selection to keep the integrity of the scene. In addition, the sensitive areas along edges are also protected. A spatio-temporal filter adapted to H.264 is applied to the non-ROI regions of both the 2D video and the depth map before compression. Experiments indicate that the ROI extracted by this method is more intact and better matches subjective perception, and that the proposed method preserves the key high-frequency information more effectively while the bit rate is reduced.

  19. Rifting-to-drifting transition of the South China Sea: early Cenozoic syn-rifting deposition imaged with prestack depth migration

    NASA Astrophysics Data System (ADS)

    Song, T.; Li, C.; Li, J.

    2012-12-01

    One of the major unsolved questions of the opening of the South China Sea (SCS) concerns its opening sequence and episodes. It has been suggested, for example, that the opening of the East and Northwest Sub-basins predated, or at least was synchronous with, that of the Southwest Sub-basin, a model contrasting with others in which an earlier opening of the Southwest Sub-basin is preferred. Difficulties in understanding the perplexing relationships between the different sub-basins are often compounded by contradictory evidence leading to different interpretations. Here we carry out pre-stack depth migration of a recently acquired multichannel reflection seismic profile from the Southwest Sub-basin of the SCS in order to reveal complicated subsurface structures and strong lateral velocity variations associated with a thick syn-rifting sequence on the southern margin of the Southwest Sub-basin. Combined with gravimetric and magnetic inversion and modeling, this depth section helps us understand the complicated transitional processes from continental rifting to seafloor spreading. This syn-rifting sequence is found to be extremely thick, over 2 seconds in two-way travel time, and is located directly within the continent-ocean transition zone. It is bounded landward by a seaward-dipping fault, and tapers out seaward. The top of this sequence is an erosional truncation, representing mainly the Oligocene-Miocene unconformity landward and a slightly older unconformity on the seaward side. Stronger erosion of this sequence is found toward the ocean basin. The sequence itself is severely faulted by a group of seaward-dipping faults developed mainly within the sequence. The overall deformation style suggests a succession of rifting, faulting, compression, tilting, and erosion episodes prior to seafloor spreading. Integrating information from gravity anomalies and seismic velocities, we interpret that this sequence represents a syn-rifting sequence developed during a long period

  20. Depth-based representations: Which coding format for 3D video broadcast applications?

    NASA Astrophysics Data System (ADS)

    Kerbiriou, Paul; Boisson, Guillaume; Sidibé, Korian; Huynh-Thu, Quan

    2011-03-01

    3D video (3DV) delivery standardization is currently ongoing in MPEG, and it is now time to choose the 3DV data representation format. What is at stake is the final quality for end users, i.e. the visual quality of the synthesized views. We focus on two major rival depth-based formats, namely Multiview Video plus Depth (MVD) and Layered Depth Video (LDV). MVD can be considered the basic depth-based 3DV format, generated by disparity estimation from multiview sequences. LDV is more sophisticated, compacting the multiview data into color and depth occlusion layers. We compare the quality of the final views using MVD2 and LDV (both containing two color channels plus two depth components) coded with MVC at various compression ratios. Depending on the format, the appropriate synthesis process is performed to generate the final stereoscopic pairs. Comparisons are provided in terms of SSIM and PSNR with respect to the original views and to synthesized references (obtained without compression). Ultimately, LDV significantly outperforms MVD when state-of-the-art reference synthesis algorithms are used. Managing occlusions before encoding is advantageous compared with handling redundant signals at the decoder side. In addition, we observe that depth quantization does not induce much loss in final view quality until a significant degradation level is reached. Improvements in disparity estimation and view synthesis algorithms are therefore still expected during the remaining standardization steps.

  1. Monocular display unit for 3D display with correct depth perception

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    The study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems come in two types of presentation method: 3-D display systems using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a display area the same size as the image screen on the panel. A display system requiring no special glasses is useful as a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. Thus a conventional display can show only one screen, and it is impossible to enlarge the screen, for example to twice its size. To enlarge the display area, the authors have developed an enlarging method using a mirror. Our extension method enables observers to see the virtual image plane and enlarges the screen area twofold. In the developed display unit, we make use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and enlarges the screen area twofold. Meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  2. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    NASA Astrophysics Data System (ADS)

    Da Vià, C.; Borri, M.; Dalla Betta, G.; Haughton, I.; Hasi, J.; Kenney, C.; Povoli, M.; Mendicino, R.

    2015-04-01

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and were installed in the first detector upgrade at the LHC, the ATLAS IBL in 2014. They are the radiation hardest sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test beam data of irradiated and non-irradiated devices bump-bonded with pixel readout electronics and simulations. Applications include High-Luminosity Tracking in the high multiplicity LHC forward regions. This paper will describe the technical advantages of this idea and the tracking application rationale.

  3. Multi-layer 3D imaging using a few viewpoint images and depth map

    NASA Astrophysics Data System (ADS)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that generates multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. Simple "Shift and Subtraction" processes are iterated to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced into gradations, is used as the initial solution of the iteration. Through experiments using a prototype of two stacked LCDs, we confirmed that three viewpoint images are enough to make multi-layer images that display a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the viewer's head motion and update the screen images in real time so that the viewer maintains a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple-viewpoint images using the depth map, so that motion parallax can be generated at the same time.

  4. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  5. Optimizing penetration depth, contrast, and resolution in 3D dermatologic OCT

    NASA Astrophysics Data System (ADS)

    Aneesh, Alex; Považay, Boris; Hofer, Bernd; Zhang, Edward Z.; Kendall, Catherine; Laufer, Jan; Popov, Sergei; Glittenberg, Carl; Binder, Susanne; Stone, Nicholas; Beard, Paul C.; Drexler, Wolfgang

    2010-02-01

    High-speed, three-dimensional optical coherence tomography (3D OCT) at 800 nm, 1060 nm and 1300 nm, with approximately 4 μm, 7 μm and 6 μm axial and less than 15 μm transverse resolution, is demonstrated to investigate the optimum wavelength region for in vivo human skin imaging in terms of contrast, dynamic range and penetration depth. 3D OCT at 1300 nm provides deeper penetration, while images obtained at 800 nm were better in terms of contrast and speckle noise. The 1060 nm region was a compromise between 800 nm and 1300 nm in terms of penetration depth and image contrast. Optimizing sensitivity, penetration and contrast enabled unprecedented visualization of micro-structural morphology underneath the glabrous skin, in hairy skin and in scar tissue. The higher contrast obtained at 800 nm appears to be critical in the in vitro tumor study. A multimodal approach combining OCT and photoacoustic (PA) imaging helped to obtain morphological as well as vascular information from deeper regions of the skin.

  6. Improvement of 3d Monte Carlo Localization Using a Depth Camera and Terrestrial Laser Scanner

    NASA Astrophysics Data System (ADS)

    Kanai, S.; Hatakeyama, R.; Date, H.

    2015-05-01

    An effective and accurate localization method in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has provided one of the most promising solutions for indoor localization. Previous MCL work has mostly been limited to 2D motion estimation in a planar map, and only a few 3D MCL approaches have been proposed recently. However, their localization accuracy and efficiency remain at an unsatisfactory level (errors of a few hundred millimetres at up to a few FPS), or have not been fully verified against precise ground truth. The purpose of this study is therefore to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. First, a terrestrial laser scanner is used to create a precise 3D mesh model as an environment map, and a professional-level depth camera is installed as the external sensor. GPU scene simulation is also introduced to speed up the prediction phase of MCL. Moreover, GPGPU programming is implemented to further accelerate the likelihood estimation phase, and anisotropic particle propagation based on observations from an inertial sensor is introduced into MCL. Improvements in localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remains below a few hundred. In addition, the inertial-sensor-based algorithm reduced the localization error to a median of 47 mm even with a smaller number of particles. The results show that our proposed 3D MCL method outperforms the previous one in both accuracy and efficiency.
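
    One MCL iteration of the kind described above can be sketched as follows (Python/NumPy). The render_depth(pose) routine stands in for the GPU scene simulation against the laser-scanned mesh, the per-axis noise vector plays the role of the anisotropic propagation, and all noise levels are assumed values.

      import numpy as np

      def mcl_step(particles, weights, odom_delta, depth_obs, render_depth, sigma=0.05):
          """One 3D MCL iteration for (n, 6) particle poses (x, y, z, roll, pitch, yaw):
          predict with odometry plus per-axis noise, weight by a Gaussian likelihood of
          the observed depth image against the rendered one, then resample."""
          n = len(particles)
          noise = np.random.normal(0.0, [0.02, 0.02, 0.01, 0.005, 0.005, 0.01], (n, 6))
          particles = particles + odom_delta + noise
          for i, pose in enumerate(particles):
              predicted = render_depth(pose)
              valid = np.isfinite(depth_obs) & np.isfinite(predicted)
              err = depth_obs[valid] - predicted[valid]
              weights[i] *= np.exp(-0.5 * np.mean(err ** 2) / sigma ** 2)
          weights = weights / weights.sum()
          positions = (np.arange(n) + np.random.rand()) / n      # systematic resampling
          idx = np.searchsorted(np.cumsum(weights), positions)
          return particles[idx], np.full(n, 1.0 / n)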

  7. Blind deconvolution of 3D fluorescence microscopy using depth-variant asymmetric PSF.

    PubMed

    Kim, Boyoung; Naemura, Takeshi

    2016-06-01

    3D wide-field fluorescence microscopy suffers from depth-variant asymmetric blur. The depth variance and axial asymmetry are due to the refractive index mismatch between the immersion and specimen layers. The radial asymmetry is due to lens imperfections and local refractive index inhomogeneities in the specimen. To obtain a PSF with these characteristics, PSF premeasurement has been tried; however, it is of limited use, since imaging conditions such as the camera position and the refractive index of the specimen change between the premeasurement and the actual imaging. In this article, we focus on removing unknown depth-variant asymmetric blur in such an optical system under the assumption of refractive index homogeneity in the specimen. We propose estimating a few parameters of a mathematical PSF model, which has a depth-variant asymmetric shape, from the observed images. After generating an initial PSF from an analysis of intensities in the observed image, the parameters are estimated with a maximum likelihood estimator. Using the estimated PSF, we implement an accelerated GEM algorithm for image deconvolution. The deconvolution results show the superiority of our algorithm in terms of accuracy, evaluated quantitatively by FWHM, relative contrast, and the standard deviation of intensity peaks and FWHM values. Microsc. Res. Tech. 79:480-494, 2016. © 2016 Wiley Periodicals, Inc. PMID:27062314

  8. Plasma penetration depth and mechanical properties of atmospheric plasma-treated 3D aramid woven composites

    NASA Astrophysics Data System (ADS)

    Chen, X.; Yao, L.; Xue, J.; Zhao, D.; Lan, Y.; Qian, X.; Wang, C. X.; Qiu, Y.

    2008-12-01

    Three-dimensional aramid woven fabrics were treated with atmospheric pressure plasmas on one side or both sides to determine the plasma penetration depth in the 3D fabrics and the influence on the final composite mechanical properties. The properties of the fibers from different layers of the single-side treated fabrics, including surface morphology, chemical composition, wettability and adhesion, were investigated using scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), contact angle measurement and microbond tests. Meanwhile, the flexural properties of the composites reinforced with untreated fabrics and with fabrics treated on both sides were compared using three-point bending tests. The results showed that the fibers from the outermost surface layer of the fabric had a significant improvement in surface roughness, chemical bonding, wettability and adhesion after plasma treatment; the treatment effect gradually diminished for the fibers in the inner layers. In the third layer, the fiber properties remained approximately the same as those of the control. In addition, three-point bending tests indicated that the 3D aramid composite had an increase of 11% in flexural strength and 12% in flexural modulus after the plasma treatment. These results indicate that composite mechanical properties can be improved by direct fabric treatment with plasmas, instead of fiber treatment, if the fabric is less than four layers thick.

  9. 3-D resistivity imaging of buried concrete infrastructure with application to unknown bridge foundation depth determination

    NASA Astrophysics Data System (ADS)

    Everett, M. E.; Arjwech, R.; Briaud, J.; Hurlebaus, S.; Medina-Cetina, Z.; Tucker, S.; Yousefpour, N.

    2010-12-01

    Bridges are vulnerable to scour, and older bridges with unknown foundations in particular constitute a significant risk to public safety. Geophysical testing of bridge foundations using 3-D resistivity imaging is a promising non-destructive technology, but its execution and reliable interpretation remain challenging. A major difficulty in diagnosing foundation depth is that a single linear electrode profile generally does not provide adequate 3-D illumination to yield a useful image of the bottom of the foundation. To further explore the capabilities of resistivity tomography, we conducted a 3-D resistivity survey at a geotechnical test area which includes groups of buried, steel-reinforced concrete structures, such as slabs and piles, with cylindrical and square cross-sections, that serve as proxies for bridge foundations. By constructing a number of 3-D tomograms using selected data subsets and comparing the resulting images, we identified efficient combinations of data acquired in the vicinity of a given foundation which enable the most cost-effective and reliable depth determination. The numerous issues involved in adapting this methodology to actual bridge sites are discussed.

  10. The impact of stereo 3D sports TV broadcasts on user's depth perception and spatial presence experience

    NASA Astrophysics Data System (ADS)

    Weigelt, K.; Wiemeyer, J.

    2014-03-01

    This work examines the impact of content and presentation parameters in 2D versus 3D on depth perception and spatial presence, and provides guidelines for stereoscopic content development for 3D sports TV broadcasts and cognate subjects. Under consideration of depth perception and spatial presence experience, a preliminary study with 8 participants (sports: soccer and boxing) and a main study with 31 participants (sports: soccer and BMX-Miniramp) were performed. The dimension (2D vs. 3D) and camera position (near vs. far) were manipulated for soccer and boxing. In addition for soccer, the field of view (small vs. large) was examined. Moreover, the direction of motion (horizontal vs. depth) was considered for BMX-Miniramp. Subjective assessments, behavioural tests and qualitative interviews were implemented. The results confirm a strong effect of 3D on both depth perception and spatial presence experience as well as selective influences of camera distance and field of view. The results can improve understanding of the perception and experience of 3D TV as a medium. Finally, recommendations are derived on how to use various 3D sports ideally as content for TV broadcasts.

  11. INTEGRATED APPROACH FOR THE PETROPHYSICAL INTERPRETATION OF POST- AND PRE-STACK 3-D SEISMIC DATA, WELL-LOG DATA, CORE DATA, GEOLOGICAL INFORMATION AND RESERVOIR PRODUCTION DATA VIA BAYESIAN STOCHASTIC INVERSION

    SciTech Connect

    Carlos Torres-Verdin; Mrinal K. Sen

    2004-03-01

    The present report summarizes the work carried out between September 30, 2002 and August 30, 2003 under DOE research contract No. DE-FC26-00BC15305. During the third year of work for this project we focused primarily on improving the efficiency of inversion algorithms and on developing algorithms for direct estimation of petrophysical parameters. The full waveform inversion algorithm for elastic property estimation was tested rigorously on a personal computer cluster. For sixteen nodes on the cluster the parallel algorithm was found to be scalable with a near linear speedup. This enabled us to invert a 2D seismic line in less than five hours of CPU time. We were invited to write a paper on our results that was subsequently accepted for publication. We also carried out a rigorous study to examine the sensitivity and resolution of seismic data to petrophysical parameters. In other words, we developed a full waveform inversion algorithm that estimates petrophysical parameters such as porosity and saturation from pre-stack seismic waveform data. First we used a modified Biot-Gassmann equation to relate petrophysical parameters to elastic parameters. The transformation was validated with a suite of well logs acquired in the deepwater Gulf of Mexico. As a part of this study, we carried out a sensitivity analysis and found that the porosity is very well resolved while the fluid saturation remains insensitive to seismic wave amplitudes. Finally we conducted a joint inversion of pre-stack seismic waveform and production history data. To overcome the computational difficulties we used a simpler waveform modeling algorithm together with an efficient subspace approach. The algorithm was tested on a realistic synthetic data set. We observed that the use of pre-stack seismic data helps tremendously to improve horizontal resolution of porosity maps. Finally, we submitted four publications to refereed technical journals, two refereed extended abstracts to technical conferences
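
    The link between petrophysical and elastic parameters mentioned above rests on a Biot-Gassmann-type fluid substitution. A standard (unmodified) Gassmann relation is sketched below (Python) as a stand-in for the project's modified transform; all moduli, densities and names are placeholders.

      import numpy as np

      def gassmann_ksat(k_dry, k_min, k_fl, phi):
          """Saturated bulk modulus from the standard Gassmann relation."""
          a = (1.0 - k_dry / k_min) ** 2
          b = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
          return k_dry + a / b

      def elastic_from_petrophysics(k_dry, mu, k_min, rho_min, k_fl, rho_fl, phi):
          """Map porosity and pore fluid to Vp, Vs and density (shear modulus is
          unaffected by the fluid); inputs in SI units (Pa, kg/m^3)."""
          k_sat = gassmann_ksat(k_dry, k_min, k_fl, phi)
          rho = (1.0 - phi) * rho_min + phi * rho_fl
          vp = np.sqrt((k_sat + 4.0 * mu / 3.0) / rho)
          vs = np.sqrt(mu / rho)
          return vp, vs, rho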

  12. Fast 3-D seismic modeling and prestack depth migration using generalized screen methods. Final report for period January 1, 1998 - December 31, 2000

    SciTech Connect

    Toksoz, M. Nafi

    2001-03-31

    Completed a theoretical analysis of phase screen propagators to answer several critical questions: the existence of a singularity in the Green's function for the case of a zero vertical wavenumber, the stability and accuracy of such propagators, and the effects of backscattering for large contrast heterogeneous media. The theory is based on separating the wavefield into forescattering and backscattering parts. The approach is robust and appropriate for earth structures with high velocity contrast. This theory also resolves the apparent singularity problem that has persisted in generalized screen propagator formulations. With this formulation we studied the effects of the commonly used approximations as a function of the degree of velocity contrast in the media.
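
    For orientation, a single split-step Fourier depth step of a one-way phase-screen propagator is sketched below (Python); it is the textbook split-step form, not the generalized-screen operator analysed in the report, and the grid and velocity arrays are assumed inputs.

      import numpy as np

      def split_step_depth_step(wavefield, omega, dx, dz, v_layer, v_ref):
          """Extrapolate a monochromatic wavefield (1D array over x) down one depth
          step dz: phase shift with a reference velocity in the wavenumber domain,
          then a thin 'screen' correction in space for the lateral slowness perturbation."""
          nx = wavefield.size
          kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
          kz = np.sqrt((omega / v_ref) ** 2 - kx ** 2 + 0j)   # vertical wavenumber
          shifted = np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz))
          screen = np.exp(1j * omega * (1.0 / v_layer - 1.0 / v_ref) * dz)
          return shifted * screen                             # evanescent part decays via the 0j branch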

  13. Head pose estimation from a 2D face image using 3D face morphing with depth parameters.

    PubMed

    Kong, Seong G; Mbouna, Ralph Oyini

    2015-06-01

    This paper presents estimation of head pose angles from a single 2D face image using a 3D face model morphed from a reference face model. A reference model refers to a 3D face of a person of the same ethnicity and gender as the query subject. The proposed scheme minimizes the disparity between the two sets of prominent facial features on the query face image and the corresponding points on the 3D face model to estimate the head pose angles. The 3D face model used is morphed from a reference model to be more specific to the query face in terms of the depth error at the feature points. The morphing process produces a 3D face model more specific to the query image when multiple 2D face images of the query subject are available for training. The proposed morphing process is computationally efficient since the depth of a 3D face model is adjusted by a scalar depth parameter at feature points. Optimal depth parameters are found by minimizing the disparity between the 2D features of the query face image and the corresponding features on the morphed 3D model projected onto 2D space. The proposed head pose estimation technique was evaluated on two benchmarking databases: 1) the USF Human-ID database for depth estimation and 2) the Pointing'04 database for head pose estimation. Experiment results demonstrate that head pose estimation errors in nodding and shaking angles are as low as 7.93° and 4.65° on average for a single 2D input face image. PMID:25706638

  14. Ankh in the depth - Subdermal 3D art implants: Radiological identification with body modification.

    PubMed

    Schaerli, Sarah; Berger, Florian; Thali, Michael J; Gascho, Dominic

    2016-05-01

    One of the core tasks in forensic medico-legal investigations is the identification of the deceased. Radiological identification using postmortem computed tomography (PMCT) is a powerful technique. In general, the implementation of forensic PMCT is rising worldwide. In addition to specific anatomical structures, medical implants or prostheses serve as markers for the comparison of antemortem and postmortem images to identify the deceased. However, non-medical implants, such as subdermal three-dimensional (3D) art implants, also allow for radiological identification. These implants are a type of body modification that have become increasingly popular over the last several decades and will therefore be employed more frequently in radiological identification in the future. To the best of our knowledge, this is the first case of radiological identification with a subdermal 3D art implant. Further, the present case shows the characteristics of a silicone 3D art implant on computed tomography, magnetic resonance imaging and X-rays. PMID:27161914

  15. Double depth-enhanced 3D integral imaging in projection-type system without diffuser

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jiao, Xiao-xue; Sun, Yu; Xie, Yan; Liu, Shao-peng

    2015-05-01

    Integral imaging is a three dimensional (3D) display technology without any additional equipment. A new system is proposed in this paper which consists of the elemental images of real images in real mode (RIRM) and the ones of virtual images in real mode (VIRM). The real images in real mode are the same as the conventional integral images. The virtual images in real mode are obtained by changing the coordinates of the corresponding points in elemental images which can be reconstructed by the lens array in virtual space. In order to reduce the spot size of the reconstructed images, the diffuser in conventional integral imaging is given up in the proposed method. Then the spot size is nearly 1/20 of that in the conventional system. And an optical integral imaging system is constructed to confirm that our proposed method opens a new way for the application of the passive 3D display technology.

  16. Lapse-time dependent coda-wave depth sensitivity to local velocity perturbations in 3-D heterogeneous elastic media

    NASA Astrophysics Data System (ADS)

    Obermann, Anne; Planès, Thomas; Hadziioannou, Céline; Campillo, Michel

    2016-07-01

    In the context of seismic monitoring, recent studies made successful use of seismic coda waves to locate medium changes on the horizontal plane. Locating the depth of the changes, however, remains a challenge. In this paper, we use 3-D wavefield simulations to address two problems: firstly, we evaluate the contribution of surface and body wave sensitivity to a change at depth. We introduce a thin layer with a perturbed velocity at different depths and measure the apparent relative velocity changes due to this layer at different times in the coda and for different degrees of heterogeneity of the model. We show that the depth sensitivity can be modelled as a linear combination of body- and surface-wave sensitivity. The lapse-time dependent sensitivity ratio of body waves and surface waves can be used to build 3-D sensitivity kernels for imaging purposes. Secondly, we compare the lapse-time behavior in the presence of a perturbation in horizontal and vertical slabs to address, for instance, the origin of the velocity changes detected after large earthquakes.

  17. 3D scene's object detection and recognition using depth layers and SIFT-based machine learning

    NASA Astrophysics Data System (ADS)

    Kounalakis, T.; Triantafyllidis, G. A.

    2011-09-01

    This paper presents a novel system that fuses efficient, state-of-the-art techniques from stereo vision and machine learning for object detection and recognition. To this end, the system first creates depth maps by employing the Graph-Cut technique. The depth information is then used for object detection by separating the objects from the rest of the scene. Next, the Scale-Invariant Feature Transform (SIFT) provides the system with unique feature key-points for each object, which are used to train an Artificial Neural Network (ANN). The system is then able to classify and recognize these objects, creating knowledge from the real world.
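
    The classification stage of such a pipeline can be sketched as follows: SIFT descriptors are pooled into bag-of-visual-words histograms and used to train a small neural network. This is only an illustrative sketch; the Graph-Cut depth segmentation step is omitted, cv2.SIFT_create requires OpenCV 4.4 or later, and train_imgs (grayscale images) and train_labels are hypothetical inputs.

        # Sketch: SIFT bag-of-words features feeding an ANN classifier (hypothetical data).
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPClassifier

        sift = cv2.SIFT_create()

        def descriptors(img):
            _, desc = sift.detectAndCompute(img, None)
            return desc if desc is not None else np.empty((0, 128), np.float32)

        def train(train_imgs, train_labels, n_words=64):
            all_desc = np.vstack([descriptors(im) for im in train_imgs])
            vocab = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)      # visual vocabulary
            def hist(im):
                desc = descriptors(im)
                if len(desc) == 0:
                    return np.zeros(n_words)
                words = vocab.predict(desc)
                return np.bincount(words, minlength=n_words) / len(words)
            X = np.array([hist(im) for im in train_imgs])
            clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000).fit(X, train_labels)
            return vocab, clf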

  18. Fabrication of 3D Templates Using a Large Depth of Focus Femtosecond Laser

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Fan; Winfield, Richard; O'Brien, Shane; Chen, Liang-Yao

    2009-09-01

    We report the use of a large depth of focus Bessel beam in the fabrication of cell structures. Two axicon lenses are investigated in the formation of high aspect ratio line structures. A sol-gel resin, with good mechanical strength, is polymerised in a modified two-photon polymerisation system. Examples of different two-dimensional grids are presented to show that the lateral resolution can be maintained even in the rapid fabrication of high-sided structures.

  19. Assessing nest-building behavior of mice using a 3D depth camera.

    PubMed

    Okayama, Tsuyoshi; Goto, Tatsuhiko; Toyoda, Atsushi

    2015-08-15

    We developed a novel method to evaluate the nest-building behavior of mice using an inexpensive depth camera. The depth camera clearly captured nest-building behavior. Using three-dimensional information from the depth camera, we obtained objective features for assessing nest-building behavior, including "volume," "radius," and "mean height". The "volume" represents the change in volume of the nesting material, a pressed cotton square that a mouse shreds and untangles in order to build its nest. During the nest-building process, the total volume of cotton fragments is increased. The "radius" refers to the radius of the circle enclosing the fragments of cotton. It describes the extent of nesting material dispersion. The "radius" averaged approximately 60mm when a nest was built. The "mean height" represents the change in the mean height of objects. If the nest walls were high, the "mean height" was also high. These features provided us with useful information for assessment of nest-building behavior, similar to conventional methods for the assessment of nest building. However, using the novel method, we found that JF1 mice built nests with higher walls than B6 mice, and B6 mice built nests faster than JF1 mice. Thus, our novel method can evaluate the differences in nest-building behavior that cannot be detected or quantified by conventional methods. In future studies, we will evaluate nest-building behaviors of genetically modified, as well as several inbred, strains of mice, with several nesting materials. PMID:26051553
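
    A minimal sketch of how the three reported features could be computed from a single calibrated depth frame is given below. The array height_map (object heights above the cage floor, in mm), the per-pixel floor area and the material threshold are hypothetical assumptions, not the authors' calibration.

        # Sketch: "volume", "radius" and "mean height" from one depth frame (hypothetical calibration).
        import numpy as np

        def nest_features(height_map, pixel_area_mm2, min_height_mm=2.0):
            mask = height_map > min_height_mm                  # pixels occupied by nesting material
            if not mask.any():
                return 0.0, 0.0, 0.0
            volume = height_map[mask].sum() * pixel_area_mm2   # material volume above the floor (mm^3)
            ys, xs = np.nonzero(mask)
            cy, cx = ys.mean(), xs.mean()
            radius = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()  # radius of enclosing circle (pixels)
            mean_height = height_map[mask].mean()
            return volume, radius, mean_height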

  20. 3D high-efficiency video coding for multi-view video and depth data.

    PubMed

    Muller, Karsten; Schwarz, Heiko; Marpe, Detlev; Bartnik, Christian; Bosse, Sebastian; Brust, Heribert; Hinz, Tobias; Lakshman, Haricharan; Merkle, Philipp; Rhee, Franz Hunn; Tech, Gerhard; Winken, Martin; Wiegand, Thomas

    2013-09-01

    This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter prediction and inter-view residual prediction are developed and integrated for coding of the dependent video views. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance, are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools. PMID:23715605

  1. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D Graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. Unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation that coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  2. Fully integrated system-on-chip for pixel-based 3D depth and scene mapping

    NASA Astrophysics Data System (ADS)

    Popp, Martin; De Coi, Beat; Thalmann, Markus; Gancarz, Radoslav; Ferrat, Pascal; Dürmüller, Martin; Britt, Florian; Annese, Marco; Ledergerber, Markus; Catregn, Gion-Pol

    2012-03-01

    We present for the first time a fully integrated system-on-chip (SoC) for pixel-based 3D range detection suited for commercial applications. It is based on the time-of-flight (ToF) principle, i.e. measuring the phase difference of a reflected pulse train. The product epc600 is fabricated using a dedicated process flow, called Espros Photonic CMOS. This integration makes it possible to achieve a Quantum Efficiency (QE) of >80% in the full wavelength band from 520 nm up to 900 nm, as well as very high timing precision in the sub-ns range, which is needed for exact detection of the phase delay. The SoC features 8x8 pixels and includes all necessary sub-components such as the ToF pixel array, voltage generation and regulation, non-volatile memory for configuration, an LED driver for active illumination, a digital SPI interface for easy communication, column-based 12-bit ADCs, a PLL, and digital data processing with temporary data storage. The system can be operated at up to 100 frames per second.
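
    The underlying range calculation of such a phase-measuring ToF sensor can be sketched in a few lines: the phase delay of the reflected modulated light is converted to distance, with the round trip accounting for the factor 4π. The 20 MHz modulation frequency below is an illustrative assumption, not a specification of the epc600.

        # Sketch: continuous-wave ToF phase-to-distance conversion (assumed modulation frequency).
        import math

        C = 299_792_458.0                       # speed of light, m/s

        def tof_distance(phase_rad, f_mod_hz=20e6):
            # the light travels to the target and back, hence 4*pi rather than 2*pi
            return C * phase_rad / (4.0 * math.pi * f_mod_hz)

        print(tof_distance(math.pi / 2))        # a quarter-cycle delay at 20 MHz corresponds to about 1.87 m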

  3. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that uses computer-processed X-ray projections to acquire tomographic images, or slices, of specific organs of the body. Motion artifacts caused by patient motion are a common problem in CT systems and may introduce undesirable distortions in CT images. This paper analyzes the critical problems behind motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and account for it in the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation that is not possible with previous techniques.

  4. Operational Retrieval of aerosol optical depth over Indian subcontinent and Indian Ocean using INSAT-3D/Imager product validation

    NASA Astrophysics Data System (ADS)

    Mishra, M. K.; Rastogi, G.; Chauhan, P.

    2014-11-01

    Aerosol optical depth (AOD) over the Indian subcontinent and Indian Ocean region is derived operationally for the first time from the geostationary earth orbit (GEO) satellite INSAT-3D Imager data at the 0.65 μm wavelength. A single-visible-channel algorithm based on clear-sky composites gives larger AOD retrieval errors than multiple-channel algorithms due to errors in estimating surface reflectance and atmospheric properties. Since the MIR channel signal is insensitive to the presence of most aerosols, the AOD retrieval algorithm in the present study employs both visible (centred at 0.65 μm) and mid-infrared (MIR, centred at 3.9 μm) band measurements, and allows us to monitor the transport of aerosols at higher temporal resolution. Comparisons made between INSAT-3D derived AOD (τI) and MODIS derived AOD (τM), co-located in space (at 1° resolution) and time during January, February and March (JFM) 2014, encompass 1165, 1052 and 900 pixels, respectively. Good agreement is found between τI and τM during JFM 2014, with linear correlation coefficients (R) of 0.87, 0.81 and 0.76, respectively. The extensive validation during JFM 2014 encompasses 215 AOD values co-located in space and time, derived from INSAT-3D (τI) and from 10 sun photometers (τA) at 9 AERONET (Aerosol Robotic Network) sites and 1 handheld sun-photometer site. The INSAT-3D derived AOD, i.e. τI, is found to lie within the retrieval errors of ±0.07 ±0.15τA, with a linear correlation coefficient (R) of 0.90 and a root mean square error (RMSE) of 0.06. The present work shows that INSAT-3D aerosol products can be used quantitatively in many applications, with caution for possible residual cloud, snow/ice, and water contamination.
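
    The validation statistics quoted above (correlation, RMSE, and the retrieval-error envelope ±0.07 ±0.15τA) can be reproduced for any set of matchups with a few lines such as the sketch below; tau_insat and tau_sunphot are hypothetical arrays of co-located AOD values.

        # Sketch: matchup statistics for satellite vs. sun-photometer AOD (hypothetical arrays).
        import numpy as np

        def validate(tau_insat, tau_sunphot):
            r = np.corrcoef(tau_insat, tau_sunphot)[0, 1]                  # linear correlation coefficient
            rmse = np.sqrt(np.mean((tau_insat - tau_sunphot) ** 2))        # root mean square error
            envelope = 0.07 + 0.15 * tau_sunphot                           # expected retrieval error
            frac_within = np.mean(np.abs(tau_insat - tau_sunphot) <= envelope)
            return r, rmse, frac_within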

  5. Investigating the San Andreas Fault System in the Northern Salton Trough by a Combination of Seismic Tomography and Pre-stack Depth Migration: Results from the Salton Seismic Imaging Project (SSIP)

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Ryberg, T.; Fuis, G. S.; Goldman, M.; Catchings, R.; Rymer, M. J.; Hole, J. A.; Stock, J. M.

    2013-12-01

    The Salton Trough in southern California is a tectonically active pull-apart basin which was formed in migrating step-overs between strike-slip faults, of which the San Andreas fault (SAF) and the Imperial fault are current examples. It is located within the large-scale transition between the onshore SAF strike-slip system to the north and the marine rift system of the Gulf of California to the south. Crustal stretching and sinking formed the distinct topographic features and sedimentary successions of the Salton Trough. The active SAF and related fault systems can produce potentially large damaging earthquakes. The Salton Seismic Imaging Project (SSIP), funded by NSF and USGS, was undertaken to generate seismic data and images to improve the knowledge of fault geometry and seismic velocities within the sedimentary basins and underlying crystalline crust around the SAF in this key region. The results from these studies are required as input for modeling of earthquake scenarios and prediction of strong ground motion in the surrounding populated areas and cities. We present seismic data analysis and results from tomography and pre-stack depth migration for a number of seismic profiles (Lines 1, 4-7) covering mainly the northern Salton Trough. The controlled-source seismic data were acquired in 2011. The seismic lines have lengths ranging from 37 to 72 km. On each profile, 9-17 explosion sources with charges of 110-460 kg were recorded by 100-m spaced vertical component receivers. On Line 7, additional OBS data were acquired within the Salton Sea. Travel times of first arrivals were picked and inverted for initial 1D velocity models. Alternatively, the starting models were derived from the crustal-scale velocity models developed by the Southern California Earthquake Center. The final 2D velocity models were obtained using the algorithm of Hole (1992; JGR). We have also tested the tomography packages FAST and SIMUL2000, resulting in similar velocity structures. An

  6. Simulating hydroplaning of submarine landslides by quasi 3D depth averaged finite element method

    NASA Astrophysics Data System (ADS)

    De Blasio, Fabio; Battista Crosta, Giovanni

    2014-05-01

    Subaqueous debris flows/submarine landslides, both in the open ocean and in fresh waters, exhibit extremely high mobility, quantified by a ratio of vertical to horizontal displacement of the order of 0.01 or even much less. It is possible to simulate subaqueous debris flows with small-scale experiments along a flume or in a pool using a cohesive mixture of clay and sand. The results have shown a strong enhancement of runout and velocity compared to the case in which the same debris flow travels without water, and have indicated hydroplaning as a possible explanation (Mohrig et al. 1998). Hydroplaning starts when the snout of the debris flow travels sufficiently fast. This generates lift forces on the front of the debris flow exceeding the self-weight of the sediment, which thus begins to travel detached from the bed, literally hovering instead of flowing. Clearly, the resistance to flow plummets because the drag stress against water is much smaller than the shear strength of the material. The consequence is a dramatic increase of the debris flow speed and runout. Does the process also occur for subaqueous landslides and debris flows in the ocean, something twelve orders of magnitude larger than the experimental ones? Obviously, no experiment will ever be capable of replicating this size; one needs to rely on numerical simulations. Results extending a depth-integrated numerical model for debris flows (Imran et al., 2001) indicate that hydroplaning is possible (De Blasio et al., 2004), but more should be done, especially with alternative numerical methodologies. In this work, finite element methods are used to simulate hydroplaning using the code MADflow (Chen, 2014), adopting a depth-averaged solution. We ran some simulations on the small scale of the laboratory experiments, and secondly

  7. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers that lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given. PMID:15157197
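
    For orientation, the conventional parametric correction that the paper improves upon can be sketched as follows: a decaying exponential is fitted to the mean intensity of each optical section and the sections are rescaled by the fitted curve. The robust incremental estimator proposed in the paper is more involved; stack is a hypothetical (z, y, x) CLSM volume and the exponential model is an assumption.

        # Sketch: baseline parametric depth-intensity correction (not the paper's robust method).
        import numpy as np
        from scipy.optimize import curve_fit

        def decay(z, i0, alpha):
            return i0 * np.exp(-alpha * z)

        def correct_attenuation(stack):
            z = np.arange(stack.shape[0], dtype=float)
            means = stack.reshape(stack.shape[0], -1).mean(axis=1)      # mean intensity per section
            (i0, alpha), _ = curve_fit(decay, z, means, p0=(means[0], 0.01))
            factors = i0 / decay(z, i0, alpha)                          # per-section correction factors
            return stack * factors[:, None, None]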

  8. Depth-kymography of vocal fold vibrations: part II. Simulations and direct comparisons with 3D profile measurements

    NASA Astrophysics Data System (ADS)

    de Mul, Frits F. M.; George, Nibu A.; Qiu, Qingjun; Rakhorst, Gerhard; Schutte, Harm K.

    2009-07-01

    We report novel direct quantitative comparisons between 3D profiling measurements and simulations of human vocal fold vibrations. Until now, in human vocal folds research, only imaging in a horizontal plane was possible. However, for the investigation of several diseases, depth information is needed, especially when the two folds act differently, e.g. in the case of tumour growth. Recently, with our novel depth-kymographic laryngoscope, we obtained calibrated data about the horizontal and vertical positions of the visible surface of the vibrating vocal folds. In order to find relations with physical parameters such as elasticity and damping constants, we numerically simulated the horizontal and vertical positions and movements of the human vocal folds while vibrating and investigated the effect of varying several parameters on the characteristics of the phonation: the masses and their dimensions, the respective forces and pressures, and the details of the vocal tract compartments. Direct one-to-one comparison with measured 3D positions presents—for the first time—a direct means of validation of these calculations. This may start a new field in vocal folds research.

  9. Nanometer depth resolution in 3D topographic analysis of drug-loaded nanofibrous mats without sample preparation.

    PubMed

    Paaver, Urve; Heinämäki, Jyrki; Kassamakov, Ivan; Hæggström, Edward; Ylitalo, Tuomo; Nolvi, Anton; Kozlova, Jekaterina; Laidmäe, Ivo; Kogermann, Karin; Veski, Peep

    2014-02-28

    We showed that scanning white light interferometry (SWLI) can provide nanometer depth resolution in 3D topographic analysis of electrospun drug-loaded nanofibrous mats without sample preparation. The method permits rapidly investigating geometric properties (e.g. fiber diameter, orientation and morphology) and surface topography of drug-loaded nanofibers and nanomats. Electrospun nanofibers of a model drug, piroxicam (PRX), and hydroxypropyl methylcellulose (HPMC) were imaged. Scanning electron microscopy (SEM) served as a reference method. SWLI 3D images featuring 29 nm by 29 nm active pixel size were obtained of a 55 μm × 40 μm area. The thickness of the drug-loaded non-woven nanomats was uniform, ranging from 2.0 μm to 3.0 μm (SWLI), and independent of the ratio between HPMC and PRX. The average diameters (n=100, SEM) for drug-loaded nanofibers were 387 ± 125 nm (HPMC and PRX 1:1), 407 ± 144 nm (HPMC and PRX 1:2), and 290 ± 100 nm (HPMC and PRX 1:4). We found advantages and limitations in both techniques. SWLI permits rapid non-contacting and non-destructive characterization of layer orientation, layer thickness, porosity, and surface morphology of electrospun drug-loaded nanofibers and nanomats. Such analysis is important because the surface topography affects the performance of nanomats in pharmaceutical and biomedical applications. PMID:24378328

  10. Click-assembled, oxygen sensing nanoconjugates for depth-resolved, near-infrared imaging in a 3D cancer model

    PubMed Central

    Nichols, Alexander J.; Roussakis, Emmanuel; Klein, Oliver J.

    2014-01-01

    Hypoxia is an important factor that contributes to the development of drug-resistant cancer, yet few non-perturbative tools exist for studying oxygen in tissue. While progress has been made in the development of chemical probes for optical oxygen mapping, penetration into poorly perfused or avascular tumor regions remains problematic. Here we report a Click-Assembled Oxygen Sensing (CAOS) nanoconjugate and demonstrate its properties in an in vitro 3D spheroid cancer model. Our synthesis relies on sequential click-based ligation of poly(amidoamine)-like subunits for rapid assembly. Using near-infrared confocal phosphorescence microscopy, we demonstrate the ability of CAOS nanoconjugates to penetrate hundreds of microns into spheroids within hours and show their sensitivity to oxygen changes throughout the nodule. This proof-of-concept study demonstrates a modular approach that is readily extensible to a wide variety of oxygen and cellular sensors for depth-resolved imaging in tissue and tissue models. PMID:24590700

  11. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e. various works of art produced using computers, have been published for hobby and entertainment. It is said that brain activation, improvement of visual eyesight, reduction of mental stress, a healing effect, etc. can be expected when a kind of CGS is properly appreciated as a stereoscopic view. There is a lot of information on internet web sites concerning all aspects of stereogram history, science, social organization, the various types of stereograms, and free software for generating CGS. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each type has advantages and disadvantages when the stereogram is viewed directly with two eyes, after training with a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), called the hidden image or picture, and the effect of an irregular shift of the texture pattern image, called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation using the parallel viewing method.

  12. Depth to the Juan De Fuca slab beneath the Cascadia subduction margin - a 3-D model for sorting earthquakes

    USGS Publications Warehouse

    McCrory, Patricia A.; Blair, J. Luke; Oppenheimer, David H.; Walter, Stephen R.

    2004-01-01

    We present an updated model of the Juan de Fuca slab beneath southern British Columbia, Washington, Oregon, and northern California, and use this model to separate earthquakes occurring above and below the slab surface. The model is based on depth contours previously published by Fluck and others (1997). Our model attempts to rectify a number of shortcomings in the original model and update it with new work. The most significant improvements include (1) a gridded slab surface in geo-referenced (ArcGIS) format, (2) continuation of the slab surface to its full northern and southern edges, (3) extension of the slab surface from 50-km depth down to 110-km beneath the Cascade arc volcanoes, and (4) revision of the slab shape based on new seismic-reflection and seismic-refraction studies. We have used this surface to sort earthquakes and present some general observations and interpretations of seismicity patterns revealed by our analysis. For example, deep earthquakes within the Juan de Fuca Plate beneath western Washington define a linear trend that may mark a tear within the subducting plate. Also, earthquakes associated with the northern strands of the San Andreas Fault abruptly terminate at the inferred southern boundary of the Juan de Fuca slab. In addition, we provide files of earthquakes above and below the slab surface and a 3-D animation or fly-through showing a shaded-relief map with plate boundaries, the slab surface, and hypocenters for use as a visualization tool.

  13. Vegetation Height Estimation Near Power transmission poles Via satellite Stereo Images using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. Absence of proper monitoring could result in damage to the power lines and consequently cause blackouts. This will affect electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, the existing approaches are time consuming and expensive. In this paper, we have proposed a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images, which were acquired using Pleiades satellites. The 3D depth of vegetation has been measured near power transmission lines using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles in the 100 km² area. We have compared the results of Pleiades satellite stereo images using dynamic programming and Graph-Cut algorithms, thereby comparing the satellite imaging sensors and depth-estimation algorithms. Our results show that the Graph-Cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.

  14. 3D seismic imaging on massively parallel computers

    SciTech Connect

    Womble, D.E.; Ober, C.C.; Oldfield, R.

    1997-02-01

    The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D, finite-difference, prestack, depth migration remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D, prestack, depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable, seismic imaging code.

  15. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    PubMed Central

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901
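
    The DTW recognition step mentioned above can be illustrated with a textbook implementation: an observed hand trajectory is assigned to the gesture template with the smallest warped distance. The per-frame feature sequences and the template dictionary below are hypothetical; this is not the authors' optimized code.

        # Sketch: dynamic time warping distance and nearest-template gesture recognition.
        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def recognize(trajectory, templates):
            # templates: dict mapping gesture name -> reference trajectory
            return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))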

  17. SU-C-213-04: Application of Depth Sensing and 3D-Printing Technique for Total Body Irradiation (TBI) Patient Measurement and Treatment Planning

    SciTech Connect

    Lee, M; Suh, T; Han, B; Xing, L; Jenkins, C

    2015-06-15

    Purpose: To develop and validate an innovative method of using depth sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion tracking cameras and integrated depth sensing was used to scan a RANDO™ phantom arranged in a TBI treatment booth to detect and store the 3D surface in a point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to extracted measurements from CT scan images. The thickness, source-to-surface distance and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error of the thickness measurements at each section of the phantom compared to CT was 0.44 ± 0.268 cm. These errors resulted in approximately 2% dose calculation error and 0.4 mm tungsten thickness deviation for the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed the overall dose difference was within 3%. Conclusion: Motion cameras and depth sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of the compensator production and ensured a more accurate treatment delivery.

  18. Tilt scanning interferometry: a 3D k-space representation for depth-resolved structure and displacement measurement in scattering materials

    NASA Astrophysics Data System (ADS)

    Galizzi, Gustavo E.; Coupland, Jeremy M.; Ruiz, Pablo D.

    2010-09-01

    Tilt Scanning Interferometry (TSI) has been recently developed as an experimental method to measure multi-component displacement fields inside the volume of semitransparent scattering materials. It can be considered as an extension of speckle interferometry in 3D, in which the illumination angle is tilted to provide depth information, or as an optical diffraction tomography technique with phase detection. It relies on phase measurements to extract the displacement information, as in the usual 2D counterparts. A numerical model to simulate the speckle fields recorded in TSI has been recently developed to enable the study on how the phase and amplitude are affected by factors such as refraction, absorption, scattering, dispersion, stress-optic coupling and spatial variations of the refractive index, all of which may lead to spurious displacements. In order to extract depth-resolved structure and phase information from TSI data, the approach had been to use Fourier Transformation of the intensity modulation signal along the illumination angle axis. However, it turns out that a more complete description of the imaging properties of the system for tomographic optical diffraction can be achieved using a 3D representation of the transfer function in k-space. According to this formalism, TSI is presented as a linear filtering operation. In this paper we describe the transfer function of TSI in 3D k-space, evaluate the 3D point spread function and present simulated results.

  19. Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens.

    PubMed

    Wang, Yu-Jen; Shen, Xin; Lin, Yi-Hsin; Javidi, Bahram

    2015-08-01

    Conventional synthetic-aperture integral imaging uses a lens array to sense the three-dimensional (3D) object or scene, which can then be reconstructed digitally or optically. However, integral imaging generally suffers from a fixed and limited range of depth of field (DOF). In this Letter, we experimentally demonstrate 3D integral-imaging endoscopy with a tunable DOF by using a single large-aperture focal-length-tunable liquid crystal (LC) lens. The proposed system can provide high spatial resolution and an extended DOF in a synthetic-aperture integral-imaging 3D endoscope. In our experiments, the image plane in the integral imaging pickup process can be tuned from 18 to 38 mm continuously using a large-aperture LC lens, and the total DOF is extended from 12 to 51 mm. To the best of our knowledge, this is the first report on synthetic-aperture integral-imaging 3D endoscopy with a large-aperture LC lens that can provide high spatial resolution 3D imaging with an extended DOF. PMID:26258358

  20. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R. Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.

  1. Finite-difference solutions of the 3-D eikonal equation

    SciTech Connect

    Fei, Tong; Fehler, M.C.; Hildebrand, S.T.

    1995-12-31

    Prestack Kirchhoff depth migration requires the computation of traveltimes from surface source and receiver locations to subsurface image locations. In 3-D problems, computational efficiency becomes important. Finite-difference solutions of the eikonal equation provide computationally efficient methods for generating the traveltime information. Here, a novel finite-difference method for computing the first-arrival traveltime by solving the eikonal equation has been developed in Cartesian coordinates. The method, which is unconditionally stable and computationally efficient, can handle instabilities due to caustics and provide information about head waves. The comparison of finite-difference solutions of the acoustic wave equation with the traveltime solutions from the eikonal equation in various structure models demonstrates that the method developed here can provide correct first-arrival traveltime information even in areas of complex velocity structure.
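
    To illustrate the kind of finite-difference eikonal solver described above, the sketch below applies the standard first-order upwind update in Gauss-Seidel sweeps (a fast-sweeping scheme) on a 2D grid. It is a simplified stand-in, not the paper's 3D algorithm; slowness (1/velocity on the grid) and the grid spacing h are hypothetical inputs.

        # Sketch: first-arrival traveltimes from |grad T| = s by upwind fast sweeping (2D, illustrative).
        import numpy as np

        def eikonal_2d(slowness, src, h=1.0, n_sweeps=8):
            ny, nx = slowness.shape
            T = np.full((ny, nx), np.inf)
            T[src] = 0.0
            orders = [(range(ny), range(nx)),
                      (range(ny), range(nx - 1, -1, -1)),
                      (range(ny - 1, -1, -1), range(nx)),
                      (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
            for _ in range(n_sweeps):
                for ys, xs in orders:
                    for i in ys:
                        for j in xs:
                            if (i, j) == src:
                                continue
                            a = min(T[i - 1, j] if i > 0 else np.inf, T[i + 1, j] if i < ny - 1 else np.inf)
                            b = min(T[i, j - 1] if j > 0 else np.inf, T[i, j + 1] if j < nx - 1 else np.inf)
                            if not np.isfinite(min(a, b)):
                                continue
                            f = slowness[i, j] * h
                            if abs(a - b) >= f:       # the wave arrives from one direction only
                                t_new = min(a, b) + f
                            else:                     # two-sided quadratic update
                                t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                            T[i, j] = min(T[i, j], t_new)
            return T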

  2. Reducing depth induced spherical aberration in 3D widefield fluorescence microscopy by wavefront coding using the SQUBIC phase mask

    NASA Astrophysics Data System (ADS)

    Patwary, Nurmohammed; Doblas, Ana; King, Sharon V.; Preza, Chrysanthe

    2014-03-01

    Imaging thick biological samples introduces spherical aberration (SA) due to refractive index (RI) mismatch between the specimen and the imaging lens immersion medium. SA increases with the increase of either depth or RI mismatch. Therefore, it is difficult to find a static compensator for SA [1]. Different wavefront coding methods [2,3] have been studied to find an optimal way of static wavefront correction to reduce depth-induced SA. Inspired by a recent design of a radially symmetric squared cubic (SQUBIC) phase mask that was tested for scanning confocal microscopy [1], we have modified the pupil using the SQUBIC mask to engineer the point spread function (PSF) of a wide-field fluorescence microscope. In this study, simulated images of a thick test object were generated using a wavefront-encoded engineered PSF (WFE-PSF) and were restored using space-invariant (SI) and depth-variant (DV) expectation maximization (EM) algorithms implemented in the COSMOS software [4]. Quantitative comparisons between restorations obtained with both the conventional and WFE PSFs are presented. Simulations show that, in the presence of SA, the use of the SIEM algorithm and a single SQUBIC-encoded WFE-PSF can yield adequate image restoration. In addition, in the presence of a large amount of SA, it is possible to get adequate results using the DVEM with fewer DV-PSFs than would typically be required for processing images acquired with a clear circular aperture (CCA) PSF. This result implies that modification of a widefield system with the SQUBIC mask renders the system less sensitive to depth-induced SA and suitable for imaging samples at larger optical depths.
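
    As a point of reference for the EM restoration mentioned above, a space-invariant Richardson-Lucy (EM) iteration with a single PSF can be written as below. The depth-variant variant and the generation of the SQUBIC-encoded PSF are not shown; blurred and psf are hypothetical 3D float arrays, and this is not the COSMOS implementation.

        # Sketch: space-invariant Richardson-Lucy (EM) deconvolution with one PSF (illustrative).
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(blurred, psf, n_iter=50, eps=1e-12):
            estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1, ::-1]
            for _ in range(n_iter):
                reblurred = fftconvolve(estimate, psf, mode="same")
                ratio = blurred / (reblurred + eps)
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate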

  3. Ryukyu Subduction Zone: 3D Geodynamic Simulations of the Effects of Slab Shape and Depth on Lattice-Preferred Orientation (LPO) and Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Tarlow, S.; Tan, E.; Billen, M. I.

    2015-12-01

    At the Ryukyu subduction zone, seismic anisotropy observations suggest that there may be strong trench-parallel flow within the mantle wedge driven by complex 3D slab geometry. However, previous simulations have either failed to account for 3D flow or used the infinite strain axis (ISA) approximation for LPO, which is known to be inaccurate in complex flow fields. Additionally, both the slab depth and the shape of the Ryukyu slab are contentious. Development of strong trench-parallel flow requires low viscosity to decouple the mantle wedge from entrainment by the sinking slab. Therefore, understanding the relationship between seismic anisotropy and the accompanying flow field will better constrain the material and dynamic properties of the mantle near subduction zones. In this study, we integrate a kinematic model for calculation of LPO (D-Rex) into a buoyancy-driven, instantaneous 3D flow simulation (ASPECT), using composite non-Newtonian rheology to investigate the dependence of LPO on slab geometry and depth at the Ryukyu Trench. To incorporate the 3D flow effects, the trench and slab extend from the southern tip of Japan to the western edge of Taiwan, and the model region is approximately 1/4 of a spherical shell extending from the surface to the core-mantle boundary. In the southernmost region we vary the slab depth and shape to test for the effects of the uncertainties in the observations. We also investigate the effect of adding locally hydrated regions above the slab that affect both the mantle rheology and the development of LPO through the consequent changes in mantle flow and the dominant (weakest) slip system. We characterize how changes in the simulation conditions affect the LPO within the mantle wedge, subducting slab and sub-slab mantle and relate these to surface observations of seismic anisotropy.

  4. Depth-varying density and organization of chondrocytes in immature and mature bovine articular cartilage assessed by 3d imaging and analysis

    NASA Technical Reports Server (NTRS)

    Jadin, Kyle D.; Wong, Benjamin L.; Bae, Won C.; Li, Kelvin W.; Williamson, Amanda K.; Schumacher, Barbara L.; Price, Jeffrey H.; Sah, Robert L.

    2005-01-01

    Articular cartilage is a heterogeneous tissue, with cell density and organization varying with depth from the surface. The objectives of the present study were to establish a method for localizing individual cells in three-dimensional (3D) images of cartilage and quantifying depth-associated variation in cellularity and cell organization at different stages of growth. Accuracy of nucleus localization was high, with 99% sensitivity relative to manual localization. Cellularity (million cells per cm³) decreased from 290, 310, and 150 near the articular surface in fetal, calf, and adult samples, respectively, to 120, 110, and 50 at a depth of 1.0 mm. The distance/angle to the nearest neighboring cell was 7.9 μm/31°, 7.1 μm/31°, and 9.1 μm/31° for cells at the articular surface of fetal, calf, and adult samples, respectively, and increased/decreased to 11.6 μm/31°, 12.0 μm/30°, and 19.2 μm/25° at a depth of 0.7 mm. The methodologies described here may be useful for analyzing the 3D cellular organization of cartilage during growth, maturation, aging, degeneration, and regeneration.
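
    The nearest-neighbour statistics reported above can be computed from localized nucleus centroids with a k-d tree, as in the sketch below. The array centroids_um (N x 3 positions in μm), the choice of depth axis, and the definition of the angle relative to the articular surface plane are assumptions for illustration, not the authors' exact analysis.

        # Sketch: nearest-neighbour distance/angle per cell from 3D centroids (hypothetical inputs).
        import numpy as np
        from scipy.spatial import cKDTree

        def neighbour_stats(centroids_um, depth_axis=2, bin_width_um=100.0):
            tree = cKDTree(centroids_um)
            dists, idx = tree.query(centroids_um, k=2)          # k=1 would be the point itself
            nn_dist = dists[:, 1]
            vec = centroids_um[idx[:, 1]] - centroids_um        # vector to the nearest neighbour
            norm = np.linalg.norm(vec, axis=1) + 1e-12
            angle = np.degrees(np.arcsin(np.abs(vec[:, depth_axis]) / norm))  # assumed angle to the surface plane
            depth_bin = (centroids_um[:, depth_axis] // bin_width_um).astype(int)
            return nn_dist, angle, depth_bin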

  5. Breaking the Crowther limit: combining depth-sectioning and tilt tomography for high-resolution, wide-field 3D reconstructions.

    PubMed

    Hovden, Robert; Ercius, Peter; Jiang, Yi; Wang, Deli; Yu, Yingchao; Abruña, Héctor D; Elser, Veit; Muller, David A

    2014-05-01

    To date, high-resolution (<1 nm) imaging of extended objects in three-dimensions (3D) has not been possible. A restriction known as the Crowther criterion forces a tradeoff between object size and resolution for 3D reconstructions by tomography. Further, the sub-Angstrom resolution of aberration-corrected electron microscopes is accompanied by a greatly diminished depth of field, causing regions of larger specimens (>6 nm) to appear blurred or missing. Here we demonstrate a three-dimensional imaging method that overcomes both these limits by combining through-focal depth sectioning and traditional tilt-series tomography to reconstruct extended objects, with high-resolution, in all three dimensions. The large convergence angle in aberration corrected instruments now becomes a benefit and not a hindrance to higher quality reconstructions. A through-focal reconstruction over a 390 nm 3D carbon support containing over 100 dealloyed and nanoporous PtCu catalyst particles revealed with sub-nanometer detail the extensive and connected interior pore structure that is created by the dealloying instability. PMID:24636875

  6. A new 3D Moho depth model for Iran based on the terrestrial gravity data and EGM2008 model

    NASA Astrophysics Data System (ADS)

    Kiamehr, R.; Gómez-Ortiz, D.

    2009-04-01

    Knowledge of the variation of crustal thickness is essential in many applications, such as forward dynamic modelling, numerical heat flow calculations and seismological applications. Dehghani in 1984 estimated the first Moho depth model over the Iranian plateau using the simple profiling method and Bouguer gravity data. However, these data suffer from deficiencies and a lack of coverage in most parts of the region. To provide a basis for an accurate analysis of the region's lithospheric stresses, we develop an up-to-date three-dimensional crustal thickness model of the Iranian Plateau using the Parker-Oldenburg iterative method. This method is based on a relationship between the Fourier transform of the gravity anomaly and the sum of the Fourier transforms of powers of the interface topography. The new model is based on the newest and most complete gravity database of Iran, which was produced by Kiamehr for computation of the high-resolution geoid model for Iran. A total of 26,125 gravity measurements were collected from different sources and used to generate an outlier-free 2'x2' gravity database for Iran. In the meantime, the Earth Gravitational Model (EGM2008) up to degree 2160 has been developed and published by the National Geospatial-Intelligence Agency. EGM2008 incorporates improved 5'x5' gravity anomalies and has benefited from the latest GRACE-based satellite solutions. The major benefit of EGM2008 is its ability to provide precise and uniform gravity data with global coverage. Two different Moho depth models have been computed based on the terrestrial and EGM2008 datasets. The minimum and maximum Moho depths for the land and EGM2008 models are 10.85-53.86 and 15.41-51.43 km, respectively. In general, we find good agreement between the Moho geometry obtained using the land and EGM2008 datasets, with an RMS difference of 2.7 km. We also compared these gravimetric Moho models with the global seismic crustal model CRUST 2.0. The differences between EGM2008 and land
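
    The spectral relationship underlying the Parker-Oldenburg scheme can be sketched as a forward model: the Fourier transform of the gravity anomaly is a truncated sum over Fourier transforms of powers of the interface topography. The sign and unit conventions below follow one common formulation and should be checked against the reference actually used; topo (interface undulations about a mean depth z0, in m), dx and delta_rho are hypothetical inputs.

        # Sketch: Parker's forward gravity formula for an undulating density interface (illustrative).
        import math
        import numpy as np

        G = 6.674e-11                                   # gravitational constant, m^3 kg^-1 s^-2

        def parker_forward(topo, dx, z0, delta_rho, n_terms=5):
            ny, nx = topo.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
            k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
            total = np.zeros((ny, nx), dtype=complex)
            for n in range(1, n_terms + 1):
                total += (k ** (n - 1) / math.factorial(n)) * np.fft.fft2(topo ** n)
            dg_f = -2 * np.pi * G * delta_rho * np.exp(-k * z0) * total
            return np.real(np.fft.ifft2(dg_f))          # gravity anomaly (m/s^2) on the same grid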

  7. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to the host computer over WiFi wireless communication and then use GPU hardware and CUDA programming to implement real-time three-dimensional stereo images by synthesizing the depth of the ROI (region of interest). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasizing effect.

  8. 2D and 3D Shear-Wave Velocity Structure to >1 Km Depth from Ambient and Active Surface Waves: Three "Deep Remi" Case Studies

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Pancha, A.; Pullammanappallil, S. K.

    2014-12-01

    Refraction microtremor routinely assesses 1D and 2D velocity-depth profiles to shallow depths of approximately 100 m, primarily for engineering applications. Estimation of both shallow and deep (>100 m) shear-velocity structure is a key element in the assessment of urban areas for potential earthquake ground shaking, damage, and the calibration of recorded ground motions. Three independent studies investigated the ability of the refraction microtremor technology to image deep velocity structure, to depths exceeding 1 km (Deep ReMi). In the first study, we were able to delineate basin thicknesses of up to 900 m and the deep-basin velocity structure beneath the Reno-area basin. Constraints on lateral velocity changes in 3D as well as on velocity profiles extended down to 1500 m, and show a possible fault offset. This deployment used 30 stand-alone wireless instruments mated to 4.5 Hz geophones, along each of five arrays 2.9 to 5.8 km long. Rayleigh-wave dispersion was clear at frequencies as low as 0.5 Hz using 120 s ambient urban noise records. The results allowed construction of a 3D velocity model, vetted by agreement with gravity studies. In a second test, a 5.8 km array delimited the southern edge of the Tahoe Basin, with constraints from gravity. There, bedrock depth increased by 250 m in thickness over a distance of 1600 m, with definition of the velocity of the deeper basin sediments. The third study delineated the collapse region of an underground nuclear explosion within a thick sequence of volcanic extrusives, using a shear-wave minivibe in a radial direction, and horizontal geophones. Analysis showed the cavity extends to 620 m depth, with a width of 180 m and a height of 220 m. Our results demonstrate that deep velocity structure can be recovered using ambient noise. This technique offers the ability to define 2D and 3D structural representations essential for seismic hazard evaluation.

  9. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending 'floating 3-D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images™ platform when written or existing software can be re-formatted without much difficulty.

  10. Impacts of 3-D radiative effects on satellite cloud detection and their consequences on cloud fraction and aerosol optical depth retrievals

    NASA Astrophysics Data System (ADS)

    Yang, Yuekui; di Girolamo, Larry

    2008-02-01

    We present the first examination on how 3-D radiative transfer impacts satellite cloud detection that uses a single visible channel threshold. The 3-D radiative transfer through predefined heterogeneous cloud fields embedded in a range of horizontally homogeneous aerosol fields have been carried out to generate synthetic nadir-viewing satellite images at a wavelength of 0.67 μm. The finest spatial resolution of the cloud field is 30 m. We show that 3-D radiative effects cause significant histogram overlap between the radiance distribution of clear and cloudy pixels, the degree to which depends on many factors (resolution, solar zenith angle, surface reflectance, aerosol optical depth (AOD), cloud top variability, etc.). This overlap precludes the existence of a threshold that can correctly separate all clear pixels from cloudy pixels. The region of clear/cloud radiance overlap includes moderately large (up to 5 in our simulations) cloud optical depths. Purpose-driven cloud masks, defined by different thresholds, are applied to the simulated images to examine their impact on retrieving cloud fraction and AOD. Large (up to 100s of %) systematic errors were observed that depended on the type of cloud mask and the factors that influence the clear/cloud radiance overlap, with a strong dependence on solar zenith angle. Different strategies in computing domain-averaged AOD were performed showing that the domain-averaged BRF from all clear pixels produced the smallest AOD biases with the weakest (but still large) dependence on solar zenith angle. The large dependence of the bias on solar zenith angle has serious implications for climate research that uses satellite cloud and aerosol products.

  11. Prestack reverse time migration for tilted transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Jang, Seonghyung; Hien, Doan Huy

    2013-04-01

    With growing interest in unconventional resource plays, anisotropy is naturally considered an important issue for improving seismic image quality. Although prestack depth migration of seismic reflection data is well known as one of the most powerful tools for imaging complex geological structures, it may lead to migration errors if anisotropy is not considered. Asymptotic analysis of wave propagation in transversely isotropic (TI) media yields a dispersion relation of coupled P- and SV-wave modes that can be converted to a fourth-order scalar partial differential equation (PDE). By setting the shear-wave velocity equal to zero, the fourth-order PDE, called an acoustic wave equation for TI media, can be reduced to a coupled system of second-order PDEs, which we solve by the finite-difference method (FDM). The result of this P-wavefield simulation is kinematically similar to an elastic, anisotropic wavefield simulation. We develop a prestack depth migration algorithm for tilted transversely isotropic (TTI) media using the reverse time migration (RTM) method. RTM is a method for imaging the subsurface using the inner product of the source wavefield extrapolated forward in time and the receiver wavefield extrapolated backward in time. We show the subsurface image in TTI media using the inner product of the partial-derivative wavefields with respect to the physical parameters and the observed data. Since the partial-derivative wavefields with respect to the physical parameters require extremely large computing times, we implemented the imaging condition as the zero-lag crosscorrelation of the virtual source and the back-propagated wavefield instead of the partial-derivative wavefields. The virtual source is calculated directly by solving the anisotropic acoustic wave equation; the back-propagated wavefield, on the other hand, is calculated using the shot gather as the source function in the anisotropic acoustic wave equation. According to the numerical model test for a simple
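
    The zero-lag crosscorrelation imaging condition referred to above can be written compactly: the image at every grid point is the sum over time of the forward-propagated source (virtual-source) wavefield multiplied by the back-propagated receiver wavefield. The snapshot arrays below are hypothetical, and the TTI wave propagation that would produce them is not shown.

        # Sketch: zero-lag crosscorrelation imaging condition for RTM (illustrative arrays).
        import numpy as np

        def imaging_condition(src_snapshots, rcv_snapshots):
            # both inputs: arrays of shape (n_time_steps, nz, nx), aligned in time
            image = np.zeros(src_snapshots.shape[1:])
            for s, r in zip(src_snapshots, rcv_snapshots):
                image += s * r                          # crosscorrelate at zero lag, point by point
            return image

        def illumination_normalized(src_snapshots, rcv_snapshots, eps=1e-12):
            # optional source-illumination normalization, a common stabilization choice
            image = imaging_condition(src_snapshots, rcv_snapshots)
            illum = (src_snapshots ** 2).sum(axis=0)
            return image / (illum + eps)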

  12. Integrating depth functions and hyper-scale terrain analysis for 3D soil organic carbon modeling in agricultural fields at regional scale

    NASA Astrophysics Data System (ADS)

    Ramirez-Lopez, L.; van Wesemael, B.; Stevens, A.; Doetterl, S.; Van Oost, K.; Behrens, T.; Schmidt, K.

    2012-04-01

    different depth functions, ii. The use of different machine learning approaches for modeling the parameters of the fitted depth functions using the ConMap features and iii. The influence of different spatial scales on the SOC profile distribution variability. Keywords: 3D modeling, Digital soil mapping, Depth functions, Terrain analysis. Reference: Behrens, T., Schmidt, K., Zhu, A.X., Scholten, T. 2010. The ConMap approach for terrain-based digital soil mapping. European Journal of Soil Science, v. 61, p. 133-143.

  13. 3D and 4D Seismic Imaging in the Oilfield; the state of the art

    NASA Astrophysics Data System (ADS)

    Strudley, A.

    2005-05-01

    Seismic imaging in the oilfield context has seen enormous changes over the last 20 years driven by a combination of improved subsurface illumination (2D to 3D), increased computational power and improved physical understanding. Today Kirchhoff Pre-stack migration (in time or depth) is the norm with anisotropic parameterisation and finite difference methods being increasingly employed. In the production context Time-Lapse (4D) Seismic is of growing importance as a tool for monitoring reservoir changes to facilitate increased productivity and recovery. In this paper we present an overview of state of the art technology in 3D and 4D seismic and look at future trends. Pre-stack Kirchhoff migration in time or depth is the imaging tool of choice for the majority of contemporary 3D datasets. Recent developments in 3D pre-stack imaging have been focussed around finite difference solutions to the acoustic wave equation, the so-called Wave Equation Migration methods (WEM). Application of finite difference solutions to imaging is certainly not new; however, 3D pre-stack migration using these schemes is a relatively recent development driven by the need for imaging complex geologic structures such as sub salt, and facilitated by increased computational resources. Finally, there is a class of imaging methods referred to as beam migration. These methods may be based on either the wave equation or rays, but all operate on a localised (in space and direction) part of the wavefield. These methods offer a bridge between the computational efficiency of Kirchhoff schemes and the improved image quality of WEM methods. Just as 3D seismic has had a radical impact on the quality of the static model of the reservoir, 4D seismic is having a dramatic impact on the dynamic model. Repeat shooting of seismic surveys after a period of production (typically one to several years) reveals changes in pressure and saturation through changes in the seismic response. The growth in interest in 4D seismic

  14. Comparison of publicly available Moho depth and crustal thickness grids with newly derived grids by 3D gravity inversion for the High Arctic region.

    NASA Astrophysics Data System (ADS)

    Lebedeva-Ivanova, Nina; Gaina, Carmen; Minakov, Alexander; Kashubin, Sergey

    2016-04-01

    We derived Moho depth and crustal thickness for the High Arctic region by a 3D forward and inverse gravity modelling method in the spectral domain (Minakov et al. 2012), using a lithosphere thermal gravity anomaly correction (Alvey et al., 2008), a vertical density variation for the sedimentary layer and lateral crustal density variation. Recently updated grids of bathymetry (Jakobsson et al., 2012), gravity anomaly (Gaina et al., 2011) and dynamic topography (Spasojevic & Gurnis, 2012) were used as input data for the algorithm. The TeMAr sedimentary thickness grid (Petrov et al., 2013) was modified according to the most recently published seismic data, re-gridded and used as input data. Other input parameters for the algorithm were calibrated using crustal-scale seismic profiles. The results are numerically compared with publicly available grids of Moho depth and crustal thickness for the High Arctic region (the CRUST 1 and GEMMA global grids; the deep Arctic Ocean grids by Glebovsky et al., 2013) and with crustal-scale seismic profiles. The global grids provide a coarser resolution of 0.5-1.0 geographic degrees and are not focused on the High Arctic region. Our grids better capture all the main features of the region and show smaller errors relative to the seismic crustal profiles compared to the CRUST 1 and GEMMA grids. The results of 3D gravity modelling by Glebovsky et al. (2013) with a separated-geostructures approach also show a good fit with the seismic profiles; however, these grids cover the deep part of the Arctic Ocean only. Alvey A, Gaina C, Kusznir NJ, Torsvik TH (2008). Integrated crustal thickness mapping and plate reconstructions for the high Arctic. Earth Planet Sci Lett 274:310-321. Gaina C, Werner SC, Saltus R, Maus S (2011). Circum-Arctic mapping project: new magnetic and gravity anomaly maps of the Arctic. Geol Soc Lond Mem 35, 39-48. Glebovsky V.Yu., Astafurova E.G., Chernykh A.A., Korneva M.A., Kaminsky V.D., Poselov V.A. (2013). Thickness of the Earth's crust in the

  15. Click-assembled, oxygen-sensing nanoconjugates for depth-resolved, near-infrared imaging in a 3D cancer model.

    PubMed

    Nichols, Alexander J; Roussakis, Emmanuel; Klein, Oliver J; Evans, Conor L

    2014-04-01

    Hypoxia is an important contributing factor to the development of drug-resistant cancer, yet few nonperturbative tools exist for studying oxygenation in tissues. While progress has been made in the development of chemical probes for optical oxygen mapping, penetration of such molecules into poorly perfused or avascular tumor regions remains problematic. A click-assembled oxygen-sensing (CAOS) nanoconjugate is reported and its properties demonstrated in an in vitro 3D spheroid cancer model. The synthesis relies on the sequential click-based ligation of poly(amidoamine)-like subunits for rapid assembly. Near-infrared confocal phosphorescence microscopy was used to demonstrate the ability of the CAOS nanoconjugates to penetrate hundreds of micrometers into spheroids within hours and to show their sensitivity to oxygen changes throughout the nodule. This proof-of-concept study demonstrates a modular approach that is readily extensible to a wide variety of oxygen and cellular sensors for depth-resolved imaging in tissue and tissue models. PMID:24590700

  16. Relative-amplitude-preserving prestack time migration by the equivalent offset method

    NASA Astrophysics Data System (ADS)

    Geiger, Hugh Douglas

    The kinematics of prestack time migration by the equivalent offset method (EOM) are well established as a simple reformulation of the double-square-root equation of seismic imaging. EOM is implemented as a nonrecursive diffraction stack, where samples in the data space are weighted, filtered, and summed to produce samples in the image space. In this dissertation, I determine the exact optimum weighting function that produces an image as a stack of angle-dependent reflectivities, and suggest practical alternatives that are appropriate for imaging using prestack time migrations. The imaging problem is treated as an inverse problem consisting of an estimation problem and an appraisal problem. As is typical in geophysical inverse problems, a quantitative solution is provided for the estimation problem, and the appraisal problem is replaced by a validation process. A framework for qualitative validation of prestack time migration is described in terms of accuracy of focusing, accuracy of relative positioning, and accuracy of absolute positioning. Quantitative validation is achieved by testing the weighting functions using synthetic seismic data. The theoretical basis for acoustic wavefield extrapolation is developed from first principles. The Kirchhoff-Helmholtz integral representation, the fundamental equation of wavefield extrapolation and imaging, provides a mathematical description of Huygens' principle, yields simplified formulae for forward and inverse extrapolation from planar and non-planar interfaces, and gives reciprocity relations for Green's functions and acoustic pressure. Two methods of depth imaging are developed, Kirchhoff-approximate migration and Kirchhoff-approximate migration/inversion. Both rely on the Kirchhoff approximation at the reflecting surface. The second method, determined from Born-approximate inversion, provides exact expressions for constant-wavespeed common-offset migration/inversion required for relative amplitude preserving prestack
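
    A minimal sketch of the equivalent-offset mapping that underlies EOM, assuming the standard relation h_e^2 = x^2 + h^2 - (2xh/(tv))^2, where x is the distance from the scatterpoint to the source-receiver midpoint, h the half-offset, t the total traveltime and v the migration velocity. The weighting and filtering discussed in the dissertation are omitted, and the array names are illustrative; gathers formed this way would subsequently be NMO-corrected and stacked to produce the image.

    ```python
    import numpy as np

    def equivalent_offset(x, h, t, v):
        """Equivalent offset h_e for a scatterpoint at horizontal distance x from
        the source-receiver midpoint, half-offset h, two-way time t, velocity v."""
        he2 = x**2 + h**2 - (2.0 * x * h / (t * v))**2
        return np.sqrt(np.maximum(he2, 0.0))

    def form_csp_gather(traces, midpoints, half_offsets, dt, csp_x, v, offset_bins):
        """Map input traces (n_traces, nt) into a common-scatterpoint gather by
        binning each sample at its equivalent offset (no weights, no filtering)."""
        nt = traces.shape[1]
        t = (np.arange(nt) + 1) * dt                      # avoid t = 0
        gather = np.zeros((len(offset_bins) - 1, nt))
        fold = np.zeros_like(gather)
        for trc, xm, h in zip(traces, midpoints, half_offsets):
            x = abs(xm - csp_x)
            he = equivalent_offset(x, h, t, v)            # one h_e per time sample
            ibin = np.digitize(he, offset_bins) - 1
            valid = (ibin >= 0) & (ibin < gather.shape[0])
            gather[ibin[valid], np.arange(nt)[valid]] += trc[valid]
            fold[ibin[valid], np.arange(nt)[valid]] += 1.0
        return gather / np.maximum(fold, 1.0)
    ```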

  17. Characterisation of natural organic matter (NOM) in depth profile of Mediterranean Sea by 3D-Fluorescence followed by PARAFAC treatment

    NASA Astrophysics Data System (ADS)

    Huiyu, Z.; Durrieu, G.; Redon, R.; Heimbuerger, L.; Mounier, S.

    2009-12-01

    A periodic series of samplings was made during one year (2008), organized by Ifremer, in the central Ligurian Sea (DYFAMED site, 43°25’N, 07°52’E, Mediterranean Sea). Spectra were measured by spectrofluorimetry (HITACHI 4500) at excitation wavelengths from 250 nm to 500 nm and emission wavelengths from 200 nm to 550 nm, with both wavelength slits set to 5 nm and a scan speed of 2400 nm/min. Parallel factor analysis (PARAFAC) is a powerful statistical technique for treating 3D-fluorescence spectra, decomposing them into a number of independent fluorescent components. Four fluorescent components were found, representing the fluorescence maxima of previously identified moieties: [Tyr], with maximal excitation and emission wavelengths of 265 nm/305 nm (tyrosine-like); [Trp], with maximal λEX/λEM = 280 nm/340 nm (Peak T, tryptophan-like group); [M], with maximal λEX/λEM = 295 nm/410 nm (Peak M, marine humic-like substance); and a double-maximum component [CA] with maximal λEX/λEM = 335 nm/445 nm (Peak C, visible humic-like group) and λEX/λEM = 250 nm/445 nm (Peak A, UV humic-like substance). The fluorescence contribution of each component at different logarithmic depths shows that the most concentrated fluorophore zone is deeper than 100 m, which differs from the dissolved organic carbon (DOC) concentration, for which the most concentrated zone is at the sea surface (B. Avril, 2002). The humic-like substances are generally less fluorescent, particularly the M component. An important peak contribution of the marine humic-like substance appeared in May at depths of 100 m and 2200 m, although the other fluorophores kept reasonable values. The intensity maximum was close to 100 m, while protein-like substances increased in the deep sea (about 400 m) and then dropped sharply at 600 m in July, August and September. This is probably due to sufficient heat from the sea surface; micro-organisms could modify their position in the depth profile of the seawater. Thanks to
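
    A minimal PARAFAC sketch for excitation-emission data of this kind, assuming the tensorly Python package is available and that the EEM spectra are stacked into a (sample × excitation × emission) array; the rank of 4 mirrors the four components reported, but the random stand-in data and all names are illustrative (in practice a non-negativity-constrained variant is often preferred for fluorescence data).

    ```python
    import numpy as np
    from tensorly.decomposition import parafac   # assumes tensorly is installed

    # Stack of excitation-emission matrices: (n_samples, n_excitation, n_emission).
    # A random stand-in here; in practice this would hold the measured EEM spectra.
    rng = np.random.default_rng(0)
    eem = rng.random((20, 51, 71))

    # Decompose into 4 trilinear components, mirroring the four fluorophores reported.
    weights, factors = parafac(eem, rank=4, n_iter_max=500, tol=1e-8)
    sample_scores, excitation_loadings, emission_loadings = factors

    # Per-sample contribution of each component (e.g. to plot against sampling depth).
    print(sample_scores.shape)   # (20, 4)
    ```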

  18. Potential field Modeling of the 3-D Geologic Structure of the San Andreas Fault Observatory at Depth (SAFOD) at Parkfield, California

    NASA Astrophysics Data System (ADS)

    McPhee, D. K.

    2003-12-01

    Gravity and magnetic data, along with other geophysical and geological constraints, are used to develop 2-D models that we use to characterize the 3-D geological structure of the San Andreas fault (SAF) zone in the vicinity of SAFOD near Parkfield, CA. The gravity data, reduced to isostatic anomalies, comprise a compilation of three different data sets with a maximum of 1.6 km grid spacing for the scattered data and closely spaced ( ˜40 m) stations along one SW-NE profile crossing the SAFOD pilot hole. Aeromagnetic data were flown at a nominal 300 m above the terrain along SW-NE flight lines perpendicular to the San Andreas Fault. Data were recorded at ˜50 m spacing along flight lines approximately 800 m apart. Ground magnetic data recorded every 5 m along lines ˜300 m apart cover a 3 x 5 km area surrounding the SAFOD pilot hole. Previous modeling showed that magnetic granitic basement rocks southwest of the SAF are divided by an inferred steep fault sub-parallel to the SAF. We compute 2-D crustal models along 5 km-long southwest-northeast profiles, one of which extends through the SAFOD pilot hole near and along the high-resolution seismic refraction/reflection survey completed in 1998 (Catchings et al., 2002). Our models are constrained by pilot hole measurements, where we see a boundary between sediment and granitic basement at ˜770 m and an order of magnitude increase in magnetic susceptibility at ˜1400 m, possibly the same depth at which the SW dipping Buzzard Canyon Fault intersects the pilot hole. Regional gravity, magnetic and geologic data indicate two very distinct basement blocks separated by a steeply dipping SAF. The shallowly dipping sedimentary section SW of the SAF coincides with the low velocity zone observed with seismic measurements. Shallow slivers of magnetic sandstone on the NE side of the SAF explain higher frequency features in the magnetic data. In addition, we show a flat lying, tabular body of serpentinite sandwiched between 2 blocks

  19. Detailed Velocity and Density models of the Cascadia Subduction Zone from Prestack Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Fortin, W.; Holbrook, W. S.; Mallick, S.; Everson, E. D.; Tobin, H. J.; Keranen, K. M.

    2014-12-01

    Understanding the geologic composition of the Cascadia Subduction Zone (CSZ) is critically important in assessing seismic hazards in the Pacific Northwest. Although the CSZ poses a potential earthquake and tsunami threat to millions of people, key details of its structure and fault mechanisms remain poorly understood. In particular, the position and character of the subduction interface remains elusive due to its relative aseismicity and low seismic reflectivity, making imaging difficult for both passive and active source methods. Modern active-source reflection seismic data acquired as part of the COAST project in 2012 provide an opportunity to study the transition from the Cascadia basin, across the deformation front, and into the accretionary prism. Coupled with advances in seismic inversion methods, these new data allow us to produce detailed velocity models of the CSZ and accurate pre-stack depth migrations for studying geologic structure. While still computationally expensive, seismic inversions can now be performed on current computing clusters at resolutions that match that of the seismic image itself. Here we present pre-stack full waveform inversions of the central seismic line of the COAST survey offshore Washington state. The resultant velocity model is produced by inversion at every CMP location, 6.25 m laterally, with vertical resolution of 0.2 times the dominant seismic frequency. We report a good average correlation value above 0.8 across the entire seismic line, determined by comparing synthetic gathers to the real pre-stack gathers. These detailed velocity models, both Vp and Vs, along with the density model, are a necessary step toward a detailed porosity cross section to be used to determine the role of fluids in the CSZ. Additionally, the P-velocity model is used to produce a pre-stack depth migration image of the CSZ.

  20. 3D seismics for geothermal reservoir characterization - a case study from Schneeberg (Germany)

    NASA Astrophysics Data System (ADS)

    Hlousek, F.; Hellwig, O.; Buske, S.

    2013-12-01

    We present the results of a 3D seismic survey acquired near Schneeberg in the western Erzgebirge (Germany). The aim of the project is to use seismic exploration methods to image and to characterize a major fault zone in crystalline rock which could be used as a geothermal reservoir at a target depth of about 5-6 km with expected temperatures of 160-180°C. For this purpose a high resolution 3D Vibroseis survey with more than 5300 source and approximately 8000 receiver locations was performed at the end of 2012 and covered an area of approximately 10 km x 13 km. The 3D survey was complemented by an additional wide-angle seismic survey using explosives along eleven profile lines radially centered at the target area. The region itself is dominated by the NW-SE striking Gera-Jáchymov fault system. The main geological features in the survey area are well known from intensive mining activities down to a depth of about 2 km. The seismic investigations aimed at imaging the partly steeply dipping fault branches at greater depth, in particular a dominant steeply NE dipping fault in the central part of the survey area. Besides this main structure, the Gera-Jáchymov fault zone consists of a couple of steeply SW dipping conjugate faults. Advanced processing and imaging methods have been applied to the data set. 3D Kirchhoff prestack depth migration delivered a clear image of the structure of the various fault branches at depths of around 2-5 km. Furthermore, focusing migration methods (e.g. coherency migration) have been applied and even sharpened the image such that the 3D seismic result allows for a profound characterization of this potential geothermal reservoir in crystalline rock.

  1. Pre-stack migration of three-dimensional seismic data

    SciTech Connect

    Fehler, M.; Brickner, R.; Cheng, N.; Higginbotham, J.; House, L.; Roberts, P.; Sukup, D.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The project sought to develop and test a three-dimensional pre-stack migration code to run on the Los Alamos CM5. This work was done in collaboration with Texaco. The authors implemented a version of Texaco's phase-shift with interpolation algorithm on the CM5. The authors also tested the algorithm on the Cray T3D in collaboration with Cray participants. Processing of seismic data is extremely computer and I/O intensive. The authors developed methods for efficiently performing both I/O and computing as appropriate for a large three-dimensional seismic dataset. The result was improved capability to image subsurface structures in the earth. The emphasis was on structures that are beneath salt in the US Gulf Coast, where many oil and gas reserves are known to exist but where identifying them from surface seismic data is currently difficult due to the large impedance contrast between the salt and surrounding strata.
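
    A minimal sketch of the core phase-shift depth step that the phase-shift-with-interpolation (PSPI) approach builds on, written in 2D for a single temporal frequency in a laterally constant-velocity layer; the interpolation between reference velocities that gives PSPI its name, and any 3D or parallel I/O aspects of the CM5 implementation, are not reproduced, and all names are illustrative.

    ```python
    import numpy as np

    def phase_shift_step(P_w, omega, v, dz, dx):
        """Extrapolate a monochromatic wavefield P(x, z; omega) downward by dz
        through a layer of constant velocity v using the phase-shift method.

        P_w : complex array over x at the current depth, for angular frequency omega.
        """
        nx = P_w.size
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)       # horizontal wavenumbers
        P_kx = np.fft.fft(P_w)                            # x -> kx

        kz2 = (omega / v) ** 2 - kx ** 2
        kz = np.where(kz2 > 0.0, np.sqrt(np.abs(kz2)), 0.0)
        propagating = kz2 > 0.0

        phase = np.exp(1j * kz * dz)                      # advance propagating waves
        decay = np.exp(-np.sqrt(np.abs(kz2)) * dz)        # damp evanescent energy
        P_kx = np.where(propagating, P_kx * phase, P_kx * decay)

        return np.fft.ifft(P_kx)                          # kx -> x at depth z + dz
    ```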

  2. If you watch it move, you'll recognize it in 3D: Transfer of depth cues between encoding and retrieval.

    PubMed

    Papenmeier, Frank; Schwan, Stephan

    2016-02-01

    Viewing objects with stereoscopic displays provides additional depth cues through binocular disparity supporting object recognition. So far, it was unknown whether this results from the representation of specific stereoscopic information in memory or a more general representation of an object's depth structure. Therefore, we investigated whether continuous object rotation acting as depth cue during encoding results in a memory representation that can subsequently be accessed by stereoscopic information during retrieval. In Experiment 1, we found such transfer effects from continuous object rotation during encoding to stereoscopic presentations during retrieval. In Experiments 2a and 2b, we found that the continuity of object rotation is important because only continuous rotation and/or stereoscopic depth but not multiple static snapshots presented without stereoscopic information caused the extraction of an object's depth structure into memory. We conclude that an object's depth structure and not specific depth cues are represented in memory. PMID:26765253

  3. Depth

    PubMed Central

    Koenderink, Jan J; van Doorn, Andrea J; Wagemans, Johan

    2011-01-01

    Depth is the feeling of remoteness, or separateness, that accompanies awareness in human modalities like vision and audition. In specific cases depths can be graded on an ordinal scale, or even measured quantitatively on an interval scale. In the case of pictorial vision this is complicated by the fact that human observers often appear to apply mental transformations that involve depths in distinct visual directions. This implies that a comparison of empirically determined depths between observers involves pictorial space as an integral entity, whereas comparing pictorial depths as such is meaningless. We describe the formal structure of pictorial space purely in the phenomenological domain, without taking recourse to the theories of optics which properly apply to physical space—a distinct ontological domain. We introduce a number of general ways to design and implement methods of geodesy in pictorial space, and discuss some basic problems associated with such measurements. We deal mainly with conceptual issues. PMID:23145244

  4. Prestack elastic generalized-screen migration for multicomponent data

    NASA Astrophysics Data System (ADS)

    Kim, Byoung Yeop; Seol, Soon Jee; Lee, Ho-Young; Byun, Joongmoo

    2016-03-01

    An efficient prestack depth migration method based on the elastic one-way wave equation was developed using an improved elastic generalized-screen propagator, which effectively describes the behavior of elastic waves with mode conversion at the interfaces and efficiently computes wave propagation in media with lateral velocity variations. The elastic propagator presented in this study is an improvement over the original elastic generalized-screen propagator: several terms of the vertical-slowness right symbol are corrected at each order relative to the original formulation, and the vertical-slowness operator in the propagator is expanded up to second order, which yields a more accurate approximation. In each screen propagation step, the multicomponent wavefields are automatically separated into the P and S wavefields by the P-S decomposition operator included in the elastic generalized-screen propagator. This process facilitates the imaging of the P and S waves separately without any additional P and S separation process after the wavefield extrapolation. Impulse response tests of the improved elastic generalized-screen propagator in a uniform-property medium proved that propagation accuracy increases with order, even when large medium perturbations are assigned. Migration tests using the developed algorithm on a simple layered model and two complex models (the SEG/EAGE salt and elastic Marmousi-2 model) demonstrated the functional advantages and capabilities of the algorithm compared with elastic migration using the scalar wave equation.

  5. True-Depth: a new type of true 3D volumetric display system suitable for CAD, medical imaging, and air-traffic control

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Floating Images, Inc. is developing a new type of volumetric monitor capable of producing a high-density set of points in 3D space. Since the points of light actually exist in space, the resulting image can be viewed with continuous parallax, both vertically and horizontally, with no headache or eyestrain. These 'real' points in space are always viewed with a perfect match between accommodation and convergence. All scanned points appear to the viewer simultaneously, making this display especially suitable for CAD, medical imaging, air-traffic control, and various military applications. This system has the potential to display imagery so accurately that a ruler could be placed within the aerial image to provide precise measurement in any direction. A special virtual imaging arrangement allows the user to superimpose 3D images on a solid object, making the object look transparent. This is particularly useful for minimally invasive surgery in which the internal structure of a patient is visible to a surgeon in 3D. Surgical procedures can be carried out through the smallest possible hole while the surgeon watches the procedure from outside the body as if the patient were transparent. Unlike other attempts to produce volumetric imaging, this system uses no massive rotating screen or any screen at all, eliminating down time due to breakage and possible danger due to potential mechanical failure. Additionally, it is also capable of displaying very large images.

  6. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  7. Traveltime computation and imaging from rugged topography in 3D TTI media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Wang, Huazhong; Yang, Qinyong; Fang, Wubao

    2014-02-01

    Foothill areas with rugged topography have great potential for oil and gas seismic exploration, but subsurface imaging in these areas is very challenging. Seismic acquisition with larger offset and wider azimuth is necessary for seismic imaging in complex areas; however, in this case anisotropy must be taken into account. To generalize pre-stack depth migration (PSDM) to 3D transversely isotropic media with vertical symmetry axes (VTI) and tilted symmetry axes (TTI) from rugged topography, a new dynamic programming approach for first-arrival traveltime computation is proposed. The first-arrival time on every uniform mesh point is calculated based on Fermat's principle with simple calculus techniques and a systematic mapping scheme. In order to calculate the minimum traveltime, a set of nonlinear equations is solved on each mesh point, where the group velocity is determined by the group angle. Based on the new first-arrival time calculation method, the corresponding PSDM and migration velocity analysis workflow for 3D anisotropic media from a rugged surface is developed. Numerical tests demonstrate that the proposed traveltime calculation method is effective in both VTI and TTI media. The migration results for 3D field data show that it is necessary to choose a smooth datum to remove the high wavenumber move-out components for PSDM with rugged topography and to take anisotropy into account to achieve better images.
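
    A minimal isotropic sketch of a grid-based, Fermat-principle traveltime computation in the same spirit as the dynamic programming scheme described above: first arrivals are found as shortest paths over a 2D mesh. The anisotropic (VTI/TTI) group-velocity evaluation and the nonlinear local solves of the paper are not reproduced, and the 8-connected stencil and names are illustrative assumptions.

    ```python
    import heapq
    import numpy as np

    def first_arrival_traveltimes(slowness, dx, src):
        """First-arrival traveltimes by shortest paths on a 2D grid.

        slowness : (nz, nx) array of 1/velocity; dx : grid spacing;
        src : (iz, ix) source index. Edge times use the average slowness
        of the two nodes (a crude Fermat / dynamic-programming approximation).
        """
        nz, nx = slowness.shape
        t = np.full((nz, nx), np.inf)
        t[src] = 0.0
        heap = [(0.0, src)]
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1),
                   (-1, -1), (-1, 1), (1, -1), (1, 1)]   # 8-connected stencil
        while heap:
            t0, (iz, ix) = heapq.heappop(heap)
            if t0 > t[iz, ix]:
                continue
            for dz_, dx_ in offsets:
                jz, jx = iz + dz_, ix + dx_
                if 0 <= jz < nz and 0 <= jx < nx:
                    step = dx * np.hypot(dz_, dx_)
                    tt = t0 + step * 0.5 * (slowness[iz, ix] + slowness[jz, jx])
                    if tt < t[jz, jx]:
                        t[jz, jx] = tt
                        heapq.heappush(heap, (tt, (jz, jx)))
        return t
    ```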

  8. Fracture prediction using prestack Q calculation and attenuation anisotropy

    NASA Astrophysics Data System (ADS)

    An, Yong

    2015-09-01

    The analysis of fractured reservoirs is very important to hydrocarbon exploration. The quality factor Q is a parameter used to characterize the attenuation of seismic waves in subsurface media. Q not only reflects the inherent properties of the medium but is also used to make predictions regarding reservoir fractures. Compared with poststack seismic data, prestack seismic data contain more detailed stratigraphic information for seismic attribute analysis and inversion in reservoirs. The extraction of absorption parameters from prestack data improves the accuracy of attenuation estimates. In this study, I present a new method for calculating Q based on the modified S transform (MST) using preprocessed common midpoint (CMP) gathers. First, I use the MST with adjustable time-frequency resolution to carry out a high-precision time-frequency analysis of prestack CMP gathers and derive the calculation formula for the improved S transform-based frequency spectrum ratio method. Then, I use the energy density ratio to calculate the slope of the frequency spectrum ratio instead of the conventional amplitude ratio. Thus, I establish the relation between the slope of the spectrum ratio and offset as well as eliminate the offset effect by multichannel linear fitting, obtaining accurate Q values from prestack seismic data. Finally, I apply the proposed prestack Q extraction method to the fractured reservoir in the Qianjin buried hill, studying P-wave absorption and attenuation anisotropy, with good results in fracture characterization.
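
    A minimal sketch of the classical spectral-ratio step that underlies this kind of Q estimation, assuming the amplitude spectra of a shallower and a deeper arrival and the traveltime difference between them are already available; the modified S transform, the energy-density ratio and the multichannel offset fitting of the paper are not reproduced, and the synthetic numbers are purely illustrative.

    ```python
    import numpy as np

    def q_from_spectral_ratio(freqs, spec_shallow, spec_deep, dt_travel):
        """Estimate Q from the log spectral ratio of two arrivals.

        ln(A_deep/A_shallow) = -pi * f * dt_travel / Q + const, so a linear fit of
        the log ratio against frequency gives Q = -pi * dt_travel / slope.
        """
        log_ratio = np.log(spec_deep / spec_shallow)
        slope, _ = np.polyfit(freqs, log_ratio, 1)
        return -np.pi * dt_travel / slope

    # Illustrative check with a synthetic constant-Q attenuation of Q = 80.
    freqs = np.linspace(10.0, 60.0, 26)          # usable bandwidth in Hz
    dt_travel = 0.4                              # traveltime between the two events (s)
    spec_shallow = np.exp(-0.01 * freqs)         # arbitrary shallow-event spectrum
    spec_deep = spec_shallow * np.exp(-np.pi * freqs * dt_travel / 80.0)
    print(q_from_spectral_ratio(freqs, spec_shallow, spec_deep, dt_travel))  # ~80
    ```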

  9. 3D constraints on a possible deep > 2.5 km massive sulphide mineralization from 2D crooked-line seismic reflection data in the Kristineberg mining area, northern Sweden

    NASA Astrophysics Data System (ADS)

    Malehmir, Alireza; Schmelzbach, Cedric; Bongajum, Emmanuel; Bellefleur, Gilles; Juhlin, Christopher; Tryggvason, Ari

    2009-12-01

    2D crooked-line seismic reflection surveys in crystalline environments are often considered challenging in their processing and interpretation. These challenges are more evident when complex diffraction signals that can originate from out-of-plane structures and a variety of geological features are present. A seismic profile in the Kristineberg mining area in northern Sweden shows an impressive diffraction package, covering an area larger than 25 km² in the subsurface at depths greater than 2.5 km. We present here a series of scenarios in which each can, to some extent, explain the nature of this extraordinarily large package of diffractions. Cross-dip analysis, diffraction imaging and modeling, as well as 3D processing of the crooked-line data provided constraints on the interpretation of the diffraction package. Overall, the results indicate that the diffraction package can be associated with at least four main short south-dipping diffractors in a depth range of 2.5-4.5 km. Candidate scenarios for the origin of the diffraction package are: (1) a series of massive sulphide deposits, (2) a series of mafic-ultramafic intrusions, (3) a major shear-zone and (4) multiple contact lithologies. We have also investigated the possible contribution of mode-converted scattered energy in the diffraction package using a modified converted-wave 3D prestack depth migration algorithm with the results indicating that a majority of the diffractions are P-wave diffractions. The 3D prestack migration of the data provided improved images of a series of steeply north-dipping mafic-ultramafic sill intrusions to a depth of about 4 km, where the diffractions appear to focus after the migration. The results and associated interpretations presented in this paper have improved our understanding of this conspicuous package of diffractions and may lead to re-evaluation of the 3D geological model of the Kristineberg mining area.

  10. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  11. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  12. Studies of 3D-cloud optical depth from small to very large values, and of the radiation and remote sensing impacts of larger-drop clustering

    SciTech Connect

    Wiscombe, Warren; Marshak, Alexander; Knyazikhin, Yuri; Chiu, Christine

    2007-05-04

    We have basically completed all the goals stated in the previous proposal and published or submitted journal papers thereon, the only exception being First-Principles Monte Carlo which has taken more time than expected. We finally finished the comprehensive book on 3D cloud radiative transfer (edited by Marshak and Davis and published by Springer), with many contributions by ARM scientists; this book was highlighted in the 2005 ARM Annual Report. We have also completed (for now) our pioneering work on new models of cloud drop clustering based on ARM aircraft FSSP data, with applications both to radiative transfer and to rainfall. This clustering work was highlighted in the FY07 “Our Changing Planet” (annual report of the US Climate Change Science Program). Our group published 22 papers, one book, and 5 chapters in that book, during this proposal period. All are listed at the end of this section. Below, we give brief highlights of some of those papers.

  13. ToF-SIMS depth profiling of cells: z-correction, 3D imaging, and sputter rate of individual NIH/3T3 fibroblasts.

    PubMed

    Robinson, Michael A; Graham, Daniel J; Castner, David G

    2012-06-01

    Proper display of three-dimensional time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging data of complex, nonflat samples requires a correction of the data in the z-direction. Inaccuracies in displaying three-dimensional ToF-SIMS data arise from projecting data from a nonflat surface onto a 2D image plane, as well as possible variations in the sputter rate of the sample being probed. The current study builds on previous studies by creating software written in Matlab, the ZCorrectorGUI (available at http://mvsa.nb.uw.edu/), to apply the z-correction to entire 3D data sets. Three-dimensional image data sets were acquired from NIH/3T3 fibroblasts by collecting ToF-SIMS images, using a dual beam approach (25 keV Bi3+ for analysis cycles and 20 keV C60^2+ for sputter cycles). The entire data cube was then corrected by using the new ZCorrectorGUI software, producing accurate chemical information from single cells in 3D. For the first time, a three-dimensional corrected view of a lipid-rich subcellular region, possibly the nuclear membrane, is presented. Additionally, the key assumption of a constant sputter rate throughout the data acquisition was tested by using ToF-SIMS and atomic force microscopy (AFM) analysis of the same cells. For the dried NIH/3T3 fibroblasts examined in this study, the sputter rate was found to not change appreciably in x, y, or z, and the cellular material was sputtered at a rate of approximately 10 nm per 1.25 × 10^13 C60^2+ ions/cm^2. PMID:22530745
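
    A minimal numpy sketch of the per-pixel z-shift that such a correction performs under a constant-sputter-rate assumption: each (x, y) depth profile is shifted so that the measured surface aligns at z = 0. The array names and the way the surface index is obtained are illustrative assumptions, not the ZCorrectorGUI's actual algorithm.

    ```python
    import numpy as np

    def z_correct(data_cube, surface_index):
        """Shift each (x, y) depth profile so the sample surface aligns at z = 0.

        data_cube     : (nz, ny, nx) ion-image stack, z = sputter-cycle index.
        surface_index : (ny, nx) array giving, per pixel, the sputter cycle at
                        which the surface was reached (e.g. from a total-ion
                        signal onset). Assumes a constant sputter rate.
        """
        nz, ny, nx = data_cube.shape
        corrected = np.zeros_like(data_cube)
        for iy in range(ny):
            for ix in range(nx):
                z0 = int(surface_index[iy, ix])
                corrected[: nz - z0, iy, ix] = data_cube[z0:, iy, ix]
        return corrected
    ```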

  14. Combining depth analysis with surface morphology analysis to analyse the prehistoric painted pottery from Majiayao Culture by confocal 3D-XRF

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Lin, Xue; Chen, Man; Peng, Shiqi; Yang, Kui; Wang, Jinbang

    2016-04-01

    The Majiayao Culture (3300 BC-2900 BC) formed one of the three painted pottery centres of the Yellow River basin, China, in prehistoric times. Painted pottery from this period is famous for its exquisite workmanship and meticulous painting. Studying the layer structure and element distribution of the paint on the pottery is conducive to investigating its workmanship, which is important for archaeological research. However, the most common analysis methods are destructive. To investigate the layers of paint on the pottery nondestructively, a confocal three-dimensional micro-X-ray fluorescence set-up combined with two individual polycapillary lenses has been used to analyse two painted pottery fragments. Nondestructive elemental depth analyses and surface topographic analysis were performed. The elemental depth profiles of Mn, Fe and Ca obtained from these measurements were consistent with those obtained using an optical microscope. The depth profiles show that there are layer structures in the two samples. The images show that the distribution of Ca is approximately homogeneous in both painted and unpainted regions. In contrast, Mn appeared only in the painted regions. Meanwhile, the distributions of Fe in the painted and unpainted regions were not the same. The surface topography shows that the pigment of the dark-brown region was coated above that of the brown region. These conclusions allowed the painting process to be inferred.

  15. Complex patterns of faulting revealed by 3D seismic data at the West Galicia rifted margin

    NASA Astrophysics Data System (ADS)

    Reston, Timothy; Cresswell, Derren; Sawyer, Dale; Ranero, Cesar; Shillington, Donna; Morgan, Julia; Lymer, Gael

    2015-04-01

    The west Galicia margin is characterised by crust thinning to less than 3 km and well-defined fault blocks, which overlie a bright reflection (the S reflector) generally interpreted as a tectonic Moho. The margin exhibits neither voluminous magmatism nor thick sediment piles to obscure the structures and the amount of extension. As such it represents an ideal location to study the process of continental breakup both through seismic imaging and potentially through drilling. Prestack depth migration of existing 2D profiles has strongly supported the interpretation of the S reflector as both a detachment and the crust-mantle boundary; wide-angle seismic has also shown that the mantle beneath S is serpentinised. Despite the quality of the existing 2D seismic images, a number of competing models have been advanced to explain the formation of this margin, including sequential faulting, polyphase faulting, multiple detachments and the gravitational collapse of the margin over exhumed mantle. As these models, all developed for the Galicia margin, have been subsequently applied to other margins, distinguishing between them has implications not only for the structure of the Galicia margin but for the process of rifting through to breakup more generally. To address these issues, in the summer of 2013 we collected a 3D combined seismic reflection and wide-angle dataset over this margin. Here we present some of the results of ongoing processing of the 3D volume, focussing on the internal structure of some of the fault blocks that overlie the S detachment. 2D processing of the data shows a relatively simple series of tilted fault blocks, bounded by west-dipping faults that detach downwards onto the bright S reflector. However, inspection of the 3D volume produced by 3D pre-stack time migration reveals that the fault blocks contain a complex set of sedimentary packages, with strata tilted to the east, west, north and south, each package bounded by faults. Furthermore, the top of crustal

  16. Interpretation of a 3D Seismic-Reflection Volume in the Basin and Range, Hawthorne, Nevada

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Kell, A. M.; Pullammanappallil, S.; Oldow, J. S.; Sabin, A.; Lazaro, M.

    2009-12-01

    A collaborative effort by the Great Basin Center for Geothermal Energy at the University of Nevada, Reno, and Optim Inc. of Reno has interpreted a 3D seismic data set recorded by the U.S. Navy Geothermal Programs Office (GPO) at the Hawthorne Army Depot, Nevada. The 3D survey incorporated about 20 NNW-striking lines covering an area of approximately 3 by 10 km. The survey covered an alluvial area below the eastern flank of the Wassuk Range. In the reflection volume the most prominent events are interpreted to be the base of Quaternary alluvium, the Quaternary Wassuk Range-front normal fault zone, and sequences of intercalated Tertiary volcanic flows and sediments. Such a data set is rare in the Basin and Range. Our interpretation reveals structural and stratigraphic details that form a basis for rapid development of the geothermal-energy resources underlying the Depot. We interpret a map of the time-elevation of the Wassuk Range fault and its associated splays and basin-ward step faults. The range-front fault is the deepest, and its isochron map provides essentially a map of "economic basement" under the prospect area. Three faults are the most readily picked through vertical sections. The fault reflections show an uncertainty of 50 to 200 ms in the time-depths we can interpret for them, due to the over-migrated appearance of the processing contractor’s prestack time-migrated data set. Proper assessment of velocities for mitigating the migration artifacts through prestack depth migration is not possible from this data set alone, as the offsets are not long enough for sufficiently deep velocity tomography. The three faults we interpreted appear as gradients in potential-field maps. In addition, the southern boundary of a major Tertiary graben may be seen within the volume as the northward termination of the strong reflections from older Tertiary volcanics. Using a transparent volume view across the survey gives a view of the volcanics in full

  17. 3-D geometry and physical property of the Mega-Splay Fault in Nankai trough

    NASA Astrophysics Data System (ADS)

    Masui, R.; Tsuji, T.; Yamada, Y.; Environmental Resource; System Engineering laboratory

    2011-12-01

    The Nankai trough is a subduction zone where the Philippine Sea plate is being subducted beneath southwest Japan at a rate of ~4-6.5 cm/y at an azimuth of ~300°-315°. Many surveys and drilling operations have been carried out in the Nankai trough, such as three-dimensional seismic reflection surveys and the Deep Sea Drilling Project (DSDP), Ocean Drilling Program (ODP) and Integrated Ocean Drilling Program (IODP). They revealed that there is a large splay fault, referred to as the 'Mega-Splay'. The Mega-Splay Fault has caused a series of catastrophic earthquakes and submarine landslides, which may have generated tsunamis. Since the fault development history may have affected the geometry of the Mega-Splay Fault and the physical properties within the fault zone, these need to be examined in detail. In this research, we used 3-D pre-stack depth migration (PSDM), 3-D pre-stack time migration (PSTM) and the P-wave velocity in the C0004B well (logging data) in order to interpret the 3-D structure of the Mega-Splay Fault. The analysis in this research is basically divided into two parts. One is the structural interpretation of the Splay Fault, based on high-amplitude reflection surfaces on the seismic profiles. The other part is acoustic impedance inversion (AI inversion), in which we inverted the seismic waveform into a physical property (in this study, acoustic impedance) using the P-wave velocity data at C0004B near the Mega-Splay Fault. The 3-D PSDM (or PSTM) clearly images details of the Splay Fault, with good continuity of reflections along the fault. It is possible on each seismic profile to trace the high-amplitude lines where rock properties change significantly. Since the Mega-Splay Fault is 45-59 m wide along the wells, we interpreted the upper and lower limits of the Mega-Splay Fault based on the high-amplitude surfaces in the 3-D PSDM. Our interpretation shows that the width of the Mega-Splay Fault varies along the fault, and that the plan geometry of the fault toe has a salient in the middle of the 3D box area, suggesting the fault could be
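
    A minimal sketch of the classical recursive (trace-integration) impedance step behind acoustic impedance inversion, assuming a reflection-coefficient series has already been estimated from the seismic trace and an initial impedance is taken from the well log; this illustrates the principle only, not the authors' inversion workflow.

    ```python
    import numpy as np

    def recursive_impedance(reflectivity, z0):
        """Recover acoustic impedance from a reflection-coefficient series.

        Uses Z[i+1] = Z[i] * (1 + r[i]) / (1 - r[i]), starting from an initial
        impedance z0 (e.g. taken from the well log at the top of the interval).
        """
        z = np.empty(len(reflectivity) + 1)
        z[0] = z0
        for i, r in enumerate(reflectivity):
            z[i + 1] = z[i] * (1.0 + r) / (1.0 - r)
        return z

    # Round-trip check with an arbitrary impedance profile.
    z_true = np.array([3.0e6, 3.2e6, 4.1e6, 3.8e6, 4.5e6])   # kg/m^2/s
    r = (z_true[1:] - z_true[:-1]) / (z_true[1:] + z_true[:-1])
    print(np.allclose(recursive_impedance(r, z_true[0]), z_true))  # True
    ```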

  18. Computing prestack Kirchhoff time migration on general purpose GPU

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohua; Li, Chuang; Wang, Shihu; Wang, Xu

    2011-10-01

    This paper introduces how to optimize a practical prestack Kirchhoff time migration program by the Compute Unified Device Architecture (CUDA) on a general purpose GPU (GPGPU). A few useful optimization methods on GPGPU are demonstrated, such as how to increase the kernel thread numbers on GPU cores, and how to utilize the memory streams to overlap GPU kernel execution time, etc. The floating-point errors on CUDA and NVidia's GPUs are discussed in detail. Some effective methods that can be used to reduce the floating-point errors are introduced. The images generated by the practical prestack Kirchhoff time migration programs for the same real-world seismic data inputs on CPU and GPU are demonstrated. The final GPGPU approach on NVidia GTX 260 is more than 17 times faster than its original CPU version on Intel's P4 3.0G.

  19. Detection of gas hydrate sediments using prestack seismic AVA inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Ru-Wei; Li, Hong-Qi; Zhang, Bao-Jin; Huang, Han-Dong; Wen, Peng-Fei

    2015-09-01

    Bottom-simulating reflectors (BSRs) in seismic profiles indicate the bottom of the gas hydrate stability zone, but it is difficult to determine the distribution and features of gas hydrate sediments (GHS) from them alone. In this study, based on AVA forward modeling and angle-domain common-image gathers, we use prestack AVA parameter consistency inversion to predict gas hydrate sediments in the Shenhu area on the northern slope of the South China Sea, and obtain the vertical and lateral features and the saturation of the GHS.
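
    A minimal AVA forward-modeling and fitting sketch using the common two-term Shuey approximation R(θ) ≈ R0 + G sin²θ, with intercept and gradient recovered by least squares from angle-gather amplitudes; this generic step is an illustrative assumption, not the specific parameter-consistency inversion used in the study, and the numbers are made up.

    ```python
    import numpy as np

    def shuey_two_term(theta_deg, intercept, gradient):
        """Two-term Shuey approximation R(theta) = R0 + G * sin^2(theta)."""
        s2 = np.sin(np.radians(theta_deg)) ** 2
        return intercept + gradient * s2

    def fit_intercept_gradient(theta_deg, amplitudes):
        """Least-squares fit of (R0, G) from picked amplitudes along an angle gather."""
        s2 = np.sin(np.radians(theta_deg)) ** 2
        A = np.column_stack([np.ones_like(s2), s2])
        (r0, g), *_ = np.linalg.lstsq(A, amplitudes, rcond=None)
        return r0, g

    theta = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
    amps = shuey_two_term(theta, intercept=-0.12, gradient=-0.20)   # illustrative values
    print(fit_intercept_gradient(theta, amps))   # ~(-0.12, -0.20)
    ```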

  20. Satellite and Surface Data Synergy for Developing a 3D Cloud Structure and Properties Characterization Over the ARM SGP. Stage 1: Cloud Amounts, Optical Depths, and Cloud Heights Reconciliation

    NASA Technical Reports Server (NTRS)

    Genkova, I.; Long, C. N.; Heck, P. W.; Minnis, P.

    2003-01-01

    One of the primary Atmospheric Radiation Measurement (ARM) Program objectives is to obtain measurements applicable to the development of models for better understanding of radiative processes in the atmosphere. We address this goal by building a three-dimensional (3D) characterization of the cloud structure and properties over the ARM Southern Great Plains (SGP). We take the approach of juxtaposing the cloud properties as retrieved from independent satellite and ground-based retrievals, and looking at the statistics of the cloud field properties. Once these retrievals are well understood, they will be used to populate the 3D characterization database. As a first step we determine the relationship between surface fractional sky cover and satellite viewing-angle-dependent cloud fraction (CF). We examine the agreement between optical depth (OD) datasets from satellite and ground, using available retrieval algorithms, in relation to the CF, cloud height, multi-layer cloud presence, and solar zenith angle (SZA). For the SGP Central Facility, where output from the active remote sensing cloud layer (ARSCL) value-added product (VAP) is available, we study the uncertainty of satellite-estimated cloud heights and evaluate the impact of this uncertainty for radiative studies.

  1. Optimal arrangements of fiber optic probes to enhance the spatial resolution in depth for 3D reflectance diffuse optical tomography with time-resolved measurements performed with fast-gated single-photon avalanche diodes

    NASA Astrophysics Data System (ADS)

    Puszka, Agathe; Di Sieno, Laura; Dalla Mora, Alberto; Pifferi, Antonio; Contini, Davide; Boso, Gianluca; Tosi, Alberto; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Dinten, Jean-Marc

    2014-02-01

    Fiber optic probes with a width limited to a few centimeters can enable diffuse optical tomography (DOT) in internal organs such as the prostate or facilitate measurements on external organs such as the breast or the brain. We have recently shown on 2D tomographic images that time-resolved measurements with a large dynamic range obtained with fast-gated single-photon avalanche diodes (SPADs) could push forward the imaged depth range in a diffusive medium at short source-detector separation compared with conventional non-gated approaches. In this work, we confirm these performances with the first 3D tomographic images reconstructed with such a setup and processed with the Mellin-Laplace transform. More precisely, we investigate the performance of hand-held probes with short interfiber distances in terms of spatial resolution and specifically demonstrate the interest of having a compact probe design featuring small source-detector separations. We compare the spatial resolution obtained with two probes having the same design but different scale factors, the first one featuring only interfiber distances of 15 mm and the second one, 10 mm. We evaluate experimentally the spatial resolution obtained with each probe on the setup with fast-gated SPADs for optical phantoms featuring two absorbing inclusions positioned at different depths, and conclude on the potential of short source-detector separations for DOT.

  2. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  3. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  4. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  5. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of

  6. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  7. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  8. Rapid 360 degree imaging and stitching of 3D objects using multiple precision 3D cameras

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank

    2008-02-01

    In this paper, we present the system architecture of a 360 degree view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-encoded structured lighting ensures precise reconstruction of the object's depth. The architecture employs the displacement between the camera and the projector to triangulate the depth information. The 3D camera system has achieved high depth resolution down to 0.1 mm on a human-head-sized object and 360 degree imaging capability.
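
    The depth recovery described above rests on camera-projector triangulation. As a rough illustration (not the authors' calibration pipeline), the sketch below assumes a rectified camera-projector pair with a known baseline and a focal length expressed in pixels; all names are hypothetical.

    ```python
    import numpy as np

    def triangulate_depth(x_cam, x_proj, baseline, focal_px):
        """Depth by triangulation for a rectified camera-projector pair.

        x_cam    : horizontal pixel coordinate of a point seen by the camera
        x_proj   : horizontal coordinate of the projector column that lit it
                   (decoded from the color-coded pattern)
        baseline : camera-projector separation in metres (illustrative)
        focal_px : focal length expressed in pixels (illustrative)
        """
        disparity = np.asarray(x_cam, dtype=float) - np.asarray(x_proj, dtype=float)
        disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)
        return focal_px * baseline / disparity   # Z = f * B / d

    # toy usage: 0.5 m baseline, 1000 px focal length, 50 px disparity -> 10 m depth
    print(triangulate_depth(650.0, 600.0, 0.5, 1000.0))
    ```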

  9. Depth migration in transversely isotropic media with explicit operators

    SciTech Connect

    Uzcategui, O.

    1994-12-01

    The author presents and analyzes three approaches to calculating explicit two-dimensional (2D) depth-extrapolation filters for all propagation modes (P, SV, and SH) in transversely isotropic media with vertical and tilted axis of symmetry. These extrapolation filters are used to do 2D poststack depth migration, and also, just as for isotropic media, these 2D filters are used in the McClellan transformation to do poststack 3D depth migration. Furthermore, the same explicit filters can also be used to do depth-extrapolation of prestack data. The explicit filters are derived by generalizations of three different approaches: the modified Taylor series, least-squares, and minimax methods initially developed for isotropic media. The examples here show that the least-squares and minimax methods produce filters with accurate extrapolation (measured in the ability to position steep reflectors) for a wider range of propagation angles than that obtained using the modified Taylor series method. However, for low propagation angles, the modified Taylor series method has smaller amplitude and phase errors than those produced by the least-squares and minimax methods. These results suggest that to get accurate amplitude estimation, modified Taylor series filters would be somewhat preferred in areas with low dips. In areas with larger dips, the least-squares and minimax methods would give a distinctly better delineation of the subsurface structures.
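
    Explicit depth extrapolation of the kind analyzed here applies a short spatial operator to each monochromatic wavefield slice and marches it downward one depth step at a time. The sketch below, a simplified illustration rather than the author's code, shows only that application step and assumes the per-frequency operator has already been designed by one of the three methods (modified Taylor series, least-squares, or minimax).

    ```python
    import numpy as np

    def extrapolate_one_step(wavefield_slice, op):
        """Advance one monochromatic wavefield slice by one depth step.

        wavefield_slice : complex array over the lateral coordinate x at depth z
        op              : short complex-valued explicit operator for this frequency
                          (its design is the subject of the paper and is assumed given)
        """
        # explicit extrapolation = spatial convolution with the short operator
        return np.convolve(wavefield_slice, op, mode="same")

    def depth_extrapolate(wavefield_slice, op, n_steps):
        """March the slice down n_steps depth intervals by repeated convolution."""
        for _ in range(n_steps):
            wavefield_slice = extrapolate_one_step(wavefield_slice, op)
        return wavefield_slice
    ```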

  10. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with that of the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject? For whom?

  11. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  12. Noise suppression using preconditioned least-squares prestack time migration: application to the Mississippian limestone

    NASA Astrophysics Data System (ADS)

    Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.

    2016-08-01

    Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower–upper–middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
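
    The iterative scheme outlined above can be illustrated with a generic conjugate-gradient least-squares loop in which a filtering step stands in for the structure-oriented preconditioner. This is an assumption-laden sketch, not the authors' implementation: `demigrate`, `migrate`, and `precondition` are placeholder callables, and applying the filter inside the loop makes the iteration only approximately conjugate.

    ```python
    import numpy as np

    def preconditioned_lsm(data, demigrate, migrate, precondition, n_iter=10):
        """Sketch of preconditioned least-squares migration by conjugate gradients.

        demigrate(m)    : forward (demigration/modeling) operator, model -> data
        migrate(d)      : its adjoint (migration), data -> model
        precondition(m) : stand-in for the structure-oriented filtering applied
                          at each iteration (placeholder, not the paper's filter)
        """
        m = np.zeros_like(migrate(data))          # starting model (image)
        r = data - demigrate(m)                   # data residual
        s = migrate(r)                            # gradient
        p = s.copy()                              # search direction
        gamma = np.vdot(s, s).real
        for _ in range(n_iter):
            q = demigrate(p)
            alpha = gamma / np.vdot(q, q).real
            m = precondition(m + alpha * p)       # filter the updated image
            r = data - demigrate(m)               # recompute residual after filtering
            s = migrate(r)
            gamma_new = np.vdot(s, s).real
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return m
    ```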

  13. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  14. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  15. Parallel processing of Prestack Kirchhoff Time Migration on a PC Cluster

    NASA Astrophysics Data System (ADS)

    Dai, Hengchang

    2005-08-01

    This paper discusses an approach that implements parallel processing of 3-D Prestack Kirchhoff Time Migration (PKTM) on a low-cost PC Cluster using the Message Passing Interface (MPI), and analyses its performance using real seismic data as an example. The PC Cluster provides a significant acceleration of the migration processing with exactly the same image quality. The ratio between the communication time and the processing time is a critical indicator of the efficiency of the PC Cluster. If the processing time is longer than the communication time, using more CPUs efficiently reduces the elapsed time; otherwise, adding CPUs does not reduce it. Applying this approach to the Alba dataset on our PC Cluster with up to 15 CPUs, the elapsed time of PKTM is inversely proportional to the number of CPUs used. The elapsed time for migrating a 2-D seismic line is reduced from 15 h using one CPU to 1 h using 15 CPUs. The elapsed time for migrating a 3-D image is reduced from 630 h using one CPU to 42 h using 15 CPUs. Further reduction can be achieved by using more CPUs; however, an optimal CPU number is expected for an application on large PC clusters with hundreds of nodes. Adapting existing algorithms to the cluster environment makes it practical to apply more accurate PKTM algorithms and construct more accurate images. This work shows that the PC Cluster is a powerful and scalable computing resource for oil and gas exploration organizations.
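
    Shot-parallel Kirchhoff migration of this kind maps naturally onto MPI: each CPU migrates its share of shot gathers and the partial images are summed at the end, so the communication phase is a single reduction. The mpi4py sketch below is a minimal illustration under that assumption; the per-shot migration routine, grid sizes, and shot count are placeholders, not the paper's implementation or the Alba dataset.

    ```python
    import numpy as np
    from mpi4py import MPI

    NZ, NX, N_SHOTS = 500, 1000, 40000   # illustrative sizes only

    def migrate_shot(shot_id):
        """Placeholder for migrating one shot gather into a partial image."""
        return np.zeros((NZ, NX))        # real code would do the Kirchhoff summation

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # processing phase: static round-robin distribution of shots over the CPUs
    local_image = np.zeros((NZ, NX))
    for shot_id in range(rank, N_SHOTS, size):
        local_image += migrate_shot(shot_id)

    # communication phase: sum the partial images onto the master node
    full_image = np.zeros((NZ, NX)) if rank == 0 else None
    comm.Reduce(local_image, full_image, op=MPI.SUM, root=0)
    ```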

  16. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method by means of light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth then a structured light field is detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. The multidirectional depth estimation can achieve high dynamic 3D imaging effectively. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method to perform high-quality 3D imaging for highly and lowly reflective surfaces. PMID:27607639

  17. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  18. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

    Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in micro-fluidics, axicon lenses were fabricated (both single and in arrays) for the generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers-deep channel. Further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of the optical performance by the 3D-FDTD method is presented.

  19. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  20. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  1. Development of a 3D VHR seismic reflection system for lacustrine settings - a case study in Lake Geneva, Switzerland

    NASA Astrophysics Data System (ADS)

    Scheidhauer, M.; Dupuy, D.; Marillier, F.; Beres, M.

    2003-04-01

    non-aliased signal to depths of 400 m with a best vertical resolution of 1.15 m. The multi-streamer system allows acquisition of high-quality data, which already after conventional 3D processing show particularly clear images of the fault zone and the overlying sediments in all directions. Prestack depth migration can further improve data quality and is more appropriate for subsequent geologic interpretation.

  2. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry include greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and the potential for a more comprehensive solution for the verification of complex radiation therapy treatments, and 3D dose measurement in general.

  3. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  4. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.

  5. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  6. Sea level history in 3D: Data acquisition and processing for an ultra-high resolution MCS survey across IODP Expedition 313 drillsite

    NASA Astrophysics Data System (ADS)

    Nedimovic, M. R.; Mountain, G. S.; Austin, J. A., Jr.; Fulthorpe, C.; Aali, M.; Baldwin, K.; Bhatnagar, T.; Johnson, C.; Küçük, H. M.; Newton, A.; Stanley, J.

    2015-12-01

    In June-July 2015, we acquired the first 3D/2D hybrid (short/long streamer) multichannel seismic (MCS) reflection dataset. These data were collected simultaneously across IODP Exp. 313 drillsites, off New Jersey, using R/V Langseth and cover ~95% of the planned 12x50 km box. Despite the large survey area, the lateral and vertical resolution for the 3D dataset is almost an order of magnitude higher than for data gathered for standard petroleum exploration. Such high resolution was made possible by the collection of common midpoint (CMP) lines whose combined length is ~3 times the Earth's circumference (~120,000 profile km) and by a source rich in high frequencies. We present details on the data acquisition, ongoing data analysis, and preliminary results. The science driving this project is presented by Mountain et al. The 3D component of this innovative survey used an athwartship cross cable, extended laterally by 2 barovanes roughly 357.5 m apart and trailed by 24 50-m P-Cables spaced ~12.5 m with near-trace offset of 53 m. Each P-Cable had 8 single hydrophone groups spaced at 6.25 m for a total of 192 channels. Record length was 4 s and sample rate 0.5 ms, with no low cut and an 824 Hz high cut filter. We ran 77 sail lines spaced ~150 m. Receiver locations were determined using 2 GPS receivers mounted on floats and 2 compasses and depth sensors per streamer. Streamer depths varied from 2.1 to 3.7 m. The 2D component used a single 3 km streamer, with 240 9-hydrophone groups spaced at 12.5 m, towed astern with near-trace offset of 229 m. The record length was 4 s and sample rate 0.5 ms, with low cut filter at 2 Hz and high cut at 412 Hz. Receiver locations were recorded using GPS at the head float and tail buoy, combined with 12 bird compasses spaced ~300 m. Nominal streamer depth was 4.5 m. The source for both systems was a 700 in3 linear array of 4 Bolt air guns suspended at 4.5 m towing depth, 271.5 m behind the ship's stern. Shot spacing was 12.5 m. Data analysis to

  7. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data are acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to a variety of factors such as angle of incidence, distance between the device and the subject, and environmental sensor data or other factors influencing the confidence of the thermal-infrared data when captured. Finally, several case studies are presented to support the usability and performance of the proposed system.

  8. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  9. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  10. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  11. Concurrent 3-D motion segmentation and 3-D interpretation of temporal sequences of monocular images.

    PubMed

    Sekkati, Hicham; Mitiche, Amar

    2006-03-01

    The purpose of this study is to investigate a variational method for joint multiregion three-dimensional (3-D) motion segmentation and 3-D interpretation of temporal sequences of monocular images. Interpretation consists of dense recovery of 3-D structure and motion from the image sequence spatiotemporal variations due to short-range image motion. The method is direct insomuch as it does not require prior computation of image motion. It allows movement of both viewing system and multiple independently moving objects. The problem is formulated following a variational statement with a functional containing three terms. One term measures the conformity of the interpretation within each region of 3-D motion segmentation to the image sequence spatiotemporal variations. The second term is of regularization of depth. The assumption that environmental objects are rigid accounts automatically for the regularity of 3-D motion within each region of segmentation. The third and last term is for the regularity of segmentation boundaries. Minimization of the functional follows the corresponding Euler-Lagrange equations. This results in iterated concurrent computation of 3-D motion segmentation by curve evolution, depth by gradient descent, and 3-D motion by least squares within each region of segmentation. Curve evolution is implemented via level sets for topology independence and numerical stability. This algorithm and its implementation are verified on synthetic and real image sequences. Viewers presented with anaglyphs of stereoscopic images constructed from the algorithm's output reported a strong perception of depth. PMID:16519351
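
    A schematic form of the three-term functional described above is given below for orientation; Ψ stands for the per-region data-conformity term, Z is depth, m_k the rigid 3-D motion of region R_k, and γ_k the region boundaries. This is a generic illustration, not the authors' exact expression.

    ```latex
    E\big(\{R_k\}, Z, \{\mathbf{m}_k\}\big) =
        \sum_k \int_{R_k} \Psi\!\left(I_x, I_y, I_t; Z, \mathbf{m}_k\right) dx\,dy
        + \lambda \int_{\Omega} \lVert \nabla Z \rVert \, dx\,dy
        + \mu \sum_k \oint_{\gamma_k} ds
    ```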

  12. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
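
    Of the two methods named, depth-from-focus is straightforward to illustrate: a per-pixel focus measure is evaluated across a focal stack and each pixel is assigned the focus distance at which the measure peaks. The sketch below assumes a grayscale focal stack and a Laplacian-energy focus measure; it is a simplified illustration, not the presenters' system.

    ```python
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def depth_from_focus(stack, focus_distances, window=9):
        """Depth-from-focus sketch for a focal stack of grayscale images.

        stack           : array of shape (n_images, H, W), one image per focus setting
        focus_distances : the n_images focus distances, in the same order
        window          : side length of the local window for the focus measure
        (Parameters are illustrative; real systems add sub-frame interpolation.)
        """
        measures = np.stack([
            uniform_filter(laplace(img.astype(float)) ** 2, size=window)
            for img in stack
        ])                                                # focus measure per pixel per frame
        best = np.argmax(measures, axis=0)                # sharpest frame index per pixel
        return np.asarray(focus_distances)[best]          # map index -> physical distance
    ```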

  13. What is 3D good for? A review of human performance on stereoscopic 3D displays

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only as necessary to ensure good performance.

  14. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. PMID:26562233

  15. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.

  16. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form by which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.

  17. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  18. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to a lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  19. Iterative Multiparameter Elastic Waveform Inversion Using Prestack Time Imaging and Kirchhoff approximation

    NASA Astrophysics Data System (ADS)

    Khaniani, Hassan

    This thesis proposes a "standard strategy" for iterative inversion of elastic properties from seismic reflection data. The term "standard" refers to the current hands-on commercial techniques that are used for the seismic imaging and inverse problem. The method is designed to reduce the computation time associated with elastic Full Waveform Inversion (FWI) methods. It makes use of AVO analysis, prestack time migration and corresponding forward modeling in an iterative scheme. The main objective is to describe the iterative inversion procedure used on seismic reflection data using simplified mathematical expressions and their numerical applications. The framework of the inversion is similar to the FWI method but with lower computational cost. The reduction of computational cost depends on the data conditioning (with or without multiple data), the level of complexity of the geological model, and acquisition conditions such as the signal-to-noise ratio (SNR). Many processing methods consider multiple events as noise and remove them from the data. This is the motivation for reducing the computational cost associated with Finite Difference Time Domain (FDTD) forward modeling and Reverse Time Migration (RTM)-based techniques. Therefore, a one-way solution of the wave equation for inversion is implemented. While less computationally intensive depth imaging methods are available by iterative coupling of ray theory and the Born approximation, it is shown that we can further reduce the cost of inversion by dropping the cost of ray tracing for traveltime estimation in a way similar to standard Prestack Time Migration (PSTM) and the corresponding forward modeling. This requires the model to have smooth lateral variations in elastic properties, so that the traveltime of the scatterpoints can be approximated by a Double Square Root (DSR) equation. To represent a more realistic and stable solution of the inverse problem, while considering the phase of supercritical angles, the
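
    The Double Square Root (DSR) traveltime referred to above is the standard prestack time migration expression: the sum of a source-leg and a receiver-leg traveltime through an RMS velocity. A minimal sketch with illustrative variable names follows.

    ```python
    import numpy as np

    def dsr_traveltime(t0, x_src, x_rcv, v_rms):
        """DSR traveltime of a scatterpoint for prestack time migration.

        t0    : two-way zero-offset time of the scatterpoint (s)
        x_src : horizontal distance from the source to the scatterpoint location (m)
        x_rcv : horizontal distance from the receiver to that location (m)
        v_rms : RMS velocity at t0 (m/s); assumes smooth lateral velocity variations
        """
        t_half = t0 / 2.0
        t_source = np.sqrt(t_half ** 2 + (x_src / v_rms) ** 2)    # source leg
        t_receiver = np.sqrt(t_half ** 2 + (x_rcv / v_rms) ** 2)  # receiver leg
        return t_source + t_receiver

    # toy usage: scatterpoint at t0 = 1.2 s, v_rms = 2500 m/s, 300 m and 700 m away
    print(dsr_traveltime(1.2, 300.0, 700.0, 2500.0))
    ```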

  20. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  1. Discrimination of reservoir dolostone within tight limestone using rock physics modeling and pre-stack parameters

    NASA Astrophysics Data System (ADS)

    Park, G.; Lee, B.; Lee, G.

    2013-12-01

    Dolostones may be differentiated from limestones based on various pre-stack seismic parameters as they are denser and faster. However, because the seismic properties of a rock are affected strongly by porosity, porous dolostones may not be significantly denser and faster than limestones. We computed various pre-stack parameters (P-impedance, S-impedance, Vp/Vs, Poisson's ratio, Lamé constants) for tight limestones using the Vp and density logs from a well that penetrated Jurassic carbonate and the Vs log, constructed from the empirical relationships of Vp and Vs. The pre-stack parameters of dolostones with 1% - 40% porosity were estimated based on the bulk and shear moduli and bulk densities computed from the formulas proposed by various workers, including Gassmann equations. Crossplots of the pre-stack parameters show that the Lamé constants (λ, μ) are most effective in differentiating dolostones from limestones. In particular, the λρ -μρ vs. μρ crossplot shows a clear-cut separation of the porous dolostones and tight limestones; the porous dolostones plot exclusively to the left of the λρ -μρ of about 25 GPa.
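
    The Lamé-based attributes used in these crossplots follow the standard impedance relations λρ = Ip² − 2Is² and μρ = Is². A minimal sketch computing them from log curves is given below; the cutoff values that separate porous dolostone from tight limestone are survey-specific and are not reproduced here.

    ```python
    import numpy as np

    def lame_attributes(vp, vs, rho):
        """Lambda-rho and mu-rho from P velocity, S velocity and bulk density logs.

        Standard relations (velocities in m/s, density in kg/m^3):
            mu * rho     = (rho * Vs)**2
            lambda * rho = (rho * Vp)**2 - 2 * (rho * Vs)**2
        """
        ip = np.asarray(rho) * np.asarray(vp)    # P-impedance
        i_s = np.asarray(rho) * np.asarray(vs)   # S-impedance
        mu_rho = i_s ** 2
        lambda_rho = ip ** 2 - 2.0 * i_s ** 2
        return lambda_rho, mu_rho
    ```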

  2. Visual inertia of rotating 3-D objects.

    PubMed

    Jiang, Y; Pantle, A J; Mark, L S

    1998-02-01

    Five experiments were designed to determine whether a rotating, transparent 3-D cloud of dots (simulated sphere) could influence the perceived direction of rotation of a subsequent sphere. Experiment 1 established conditions under which the direction of rotation of a virtual sphere was perceived unambiguously. When a near-far luminance difference and perspective depth cues were present, observers consistently saw the sphere rotate in the intended direction. In Experiment 2, a near-far luminance difference was used to create an unambiguous rotation sequence that was followed by a directionally ambiguous rotation sequence that lacked both the near-far luminance cue and the perspective cue. Observers consistently saw the second sequence as rotating in the same direction as the first, indicating the presence of 3-D visual inertia. Experiment 3 showed that 3-D visual inertia was sufficiently powerful to bias the perceived direction of a rotation sequence made unambiguous by a near-far luminance cue. Experiment 5 showed that 3-D visual inertia could be obtained using an occlusion depth cue to create an unambiguous inertia-inducing sequence. Finally, Experiments 2, 4, and 5 all revealed a fast-decay phase of inertia that lasted for approximately 800 msec, followed by an asymptotic phase that lasted for periods as long as 1,600 msec. The implications of these findings are examined with respect to motion mechanisms of 3-D visual inertia. PMID:9529911

  3. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  4. 3D printed diffractive terahertz lenses.

    PubMed

    Furlan, Walter D; Ferrando, Vicente; Monsoriu, Juan A; Zagrajek, Przemysław; Czerwińska, Elżbieta; Szustakowski, Mieczysław

    2016-04-15

    A 3D printer was used to realize custom-made diffractive THz lenses. After testing several materials, phase binary lenses with periodic and aperiodic radial profiles were designed and constructed in polyamide material to work at 0.625 THz. The nonconventional focusing properties of such lenses were assessed by computing and measuring their axial point spread function (PSF). Our results demonstrate that inexpensive 3D printed THz diffractive lenses can be reliably used in focusing and imaging THz systems. Diffractive THz lenses with unprecedented features, such as extended depth of focus or bifocalization, have been demonstrated. PMID:27082335

  5. The rendering context for stereoscopic 3D web

    NASA Astrophysics Data System (ADS)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand-new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopy technologies to the CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that, we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.

  6. 3D dynamic roadmapping for abdominal catheterizations.

    PubMed

    Bender, Frederik; Groher, Martin; Khamene, Ali; Wein, Wolfgang; Heibel, Tim Hauke; Navab, Nassir

    2008-01-01

    Despite rapid advances in interventional imaging, the navigation of a guide wire through abdominal vasculature remains, not only for novice radiologists, a difficult task. Since this navigation is mostly based on 2D fluoroscopic image sequences from one view, the process is slowed down significantly due to missing depth information and patient motion. We propose a novel approach for 3D dynamic roadmapping in deformable regions by predicting the location of the guide wire tip in a 3D vessel model from the tip's 2D location, respiratory motion analysis, and view geometry. In a first step, the method compensates for the apparent respiratory motion in 2D space before backprojecting the 2D guide wire tip into three dimensional space, using a given projection matrix. To countervail the error connected to the projection parameters and the motion compensation, as well as the ambiguity caused by vessel deformation, we establish a statistical framework, which computes a reliable estimate of the guide wire tip location within the 3D vessel model. With this 2D-to-3D transfer, the navigation can be performed from arbitrary viewing angles, disconnected from the static perspective view of the fluoroscopic sequence. Tests on a realistic breathing phantom and on synthetic data with a known ground truth clearly reveal the superiority of our approach compared to naive methods for 3D roadmapping. The concepts and information presented in this paper are based on research and are not commercially available. PMID:18982662
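
    The backprojection step described above can be illustrated with a 3x4 projection matrix: the motion-compensated 2D tip position defines a viewing ray through the camera centre, which is then related to the 3D vessel model. The sketch below is a simplified stand-in for the statistical estimation framework (nearest-centreline-vertex selection replaces it), and all names are hypothetical.

    ```python
    import numpy as np

    def backproject_ray(P, tip_2d):
        """Viewing ray (origin, unit direction) of a 2D tip location.

        P      : 3x4 projection matrix of the fluoroscopic view
        tip_2d : motion-compensated (x, y) tip position in the image
        """
        _, _, vt = np.linalg.svd(P)
        centre = vt[-1]
        centre = centre[:3] / centre[3]                 # camera centre (null vector of P)
        x_h = np.array([tip_2d[0], tip_2d[1], 1.0])
        X = np.linalg.pinv(P) @ x_h
        X = X[:3] / X[3]                                # one 3D point projecting to the pixel
        d = X - centre
        return centre, d / np.linalg.norm(d)

    def nearest_centerline_point(origin, direction, centerline):
        """Crude stand-in for the statistical estimate: pick the vessel-centreline
        vertex (Nx3 array) closest to the backprojected ray."""
        rel = centerline - origin
        t = rel @ direction                             # projection onto the ray
        dist = np.linalg.norm(rel - np.outer(t, direction), axis=1)
        return centerline[np.argmin(dist)]
    ```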

  7. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method. In conjunction with matching features, we get disparity images. When the camera moves, the corresponding feature points, obtained from each lens of the 3D camera, are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, relative pose parameters of each lens are calculated via Essential matrices. Essential matrices are computed from the Fundamental matrix, which is calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation matrix by d-motion. This is required because the camera motion obtained from the Essential matrix is only determined up to scale. Finally, we optimize camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using a 3D camera, and fine surveillance systems which need not only depth information but also camera motion parameters in real time.
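
    The pose-from-epipolar-geometry step can be sketched with standard OpenCV calls: a RANSAC-estimated Fundamental matrix, the Essential matrix obtained from the known intrinsics, and a relative pose that is determined only up to scale. The disparity-based scale determination and the multi-constraint optimization described above are not reproduced, and identical intrinsics are assumed for both lenses for simplicity.

    ```python
    import numpy as np
    import cv2

    def relative_pose(pts1, pts2, K):
        """Relative rotation R and unit-norm translation t between two views.

        pts1, pts2 : Nx2 float arrays of tracked feature correspondences (e.g. from KLT)
        K          : 3x3 intrinsic matrix (assumed the same for both lenses here)
        """
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        E = K.T @ F @ K                            # essential matrix from the intrinsics
        inliers1 = pts1[mask.ravel() == 1]
        inliers2 = pts2[mask.ravel() == 1]
        _, R, t, _ = cv2.recoverPose(E, inliers1, inliers2, K)
        return R, t                                # t is only determined up to scale
    ```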

  8. STAR3D: a stack-based RNA 3D structural alignment tool

    PubMed Central

    Ge, Ping; Zhang, Shaojie

    2015-01-01

    The various roles of versatile non-coding RNAs typically require the attainment of complex high-order structures. Therefore, comparing the 3D structures of RNA molecules can yield in-depth understanding of their functional conservation and evolutionary history. Recently, many powerful tools have been developed to align RNA 3D structures. Although some methods rely on both backbone conformations and base pairing interactions, none of them consider the entire hierarchical formation of the RNA secondary structure. One of the major issues is that directly applying the algorithms of matching 2D structures to the 3D coordinates is particularly time-consuming. In this article, we propose a novel RNA 3D structural alignment tool, STAR3D, to take into full account the 2D relations between stacks without the complicated comparison of secondary structures. First, the 3D conserved stacks in the inputs are identified and then combined into a tree-like consensus. Afterward, the loop regions are compared one-to-one in accordance with their relative positions in the consensus tree. The experimental results show that the prediction of STAR3D is more accurate for both non-homologous and homologous RNAs than other state-of-the-art tools with shorter running time. PMID:26184875

  9. STAR3D: a stack-based RNA 3D structural alignment tool.

    PubMed

    Ge, Ping; Zhang, Shaojie

    2015-11-16

    The various roles of versatile non-coding RNAs typically require the attainment of complex high-order structures. Therefore, comparing the 3D structures of RNA molecules can yield in-depth understanding of their functional conservation and evolutionary history. Recently, many powerful tools have been developed to align RNA 3D structures. Although some methods rely on both backbone conformations and base pairing interactions, none of them consider the entire hierarchical formation of the RNA secondary structure. One of the major issues is that directly applying the algorithms of matching 2D structures to the 3D coordinates is particularly time-consuming. In this article, we propose a novel RNA 3D structural alignment tool, STAR3D, to take into full account the 2D relations between stacks without the complicated comparison of secondary structures. First, the 3D conserved stacks in the inputs are identified and then combined into a tree-like consensus. Afterward, the loop regions are compared one-to-one in accordance with their relative positions in the consensus tree. The experimental results show that the prediction of STAR3D is more accurate for both non-homologous and homologous RNAs than other state-of-the-art tools with shorter running time. PMID:26184875

  10. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format which is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The recent interest shown by the movie industry in 3D has significantly increased the availability of stereo material. In this context, the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper, we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved an excellent match with our 3D displays.
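
    The multiple-footprint idea can be illustrated by computing a windowed matching cost at several window sizes, which yields one disparity candidate per pixel per footprint. The sketch below uses a simple sum-of-absolute-differences cost and replaces the paper's surface-filtering selection with a per-pixel median; the window sizes and the fusion rule are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def footprint_candidates(left, right, max_disp, footprints=(3, 9, 21)):
        """One disparity candidate per pixel for each footprint (window size).

        left, right : rectified grayscale images as float arrays
        max_disp    : disparity search range in pixels
        footprints  : matching-window sizes; values are illustrative
        """
        height, width = left.shape
        candidates = []
        for w in footprints:
            costs = np.full((max_disp, height, width), np.inf)
            for d in range(max_disp):
                diff = np.abs(left[:, d:] - right[:, :width - d])
                costs[d, :, d:] = uniform_filter(diff, size=w)   # windowed SAD cost
            candidates.append(np.argmin(costs, axis=0))          # best disparity per pixel
        return np.stack(candidates)

    def fuse_candidates(candidates):
        """Crude stand-in for the surface-filtering selection: per-pixel median."""
        return np.median(candidates, axis=0)
    ```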

  11. 3D Seismic Reflection Experiment over the Galicia Deep Basin

    NASA Astrophysics Data System (ADS)

    Sawyer, D. S.; Jordan, B.; Reston, T. J.; Minshull, T. A.; Klaeschen, D.; Ranero, C.; Shillington, D. J.; Morgan, J. K.

    2014-12-01

    From June through September 2013, a 3D reflection and a long-offset seismic experiment were conducted at the Galicia rifted margin by investigators from the US, UK, Germany, and Spain. The 3D multichannel experiment covered 64 km by 20 km (1280 km2), using the RV Marcus Langseth. Four streamers 6 km long were deployed at 12.5 m hydrophone channel spacing. The streamers were 200 m apart. Two airgun arrays, each 3300 cu in, were fired alternately every 37.5 m, to collectively yield a 400 m wide sail line consisting of 8 CMP lines at 50 m spacing. The long-offset seismic experiment included 72 short-period OBSs deployed below the 3D reflection survey box. Most of the instruments recorded all of the airgun array shots. The 3D seismic box covered a variety of geologic features. The Peridotite Ridge (PR) is associated with the exhumation of upper mantle rocks to the seafloor during the final stage of the continental separation between the Galicia Bank and the Grand Banks of Newfoundland. The S reflector is present below most of the continental blocks under the deep Galicia basin; S is interpreted to be a low-angle detachment fault formed late in the rifting process. The box also covers a number of rotated fault-block basins and ranges containing pre- and syn-rift sediments. Initial observations from stacked 3D seismic data, and samples of 2D pre-stack time migrated (PSTM) 3D seismic data, show that the PR is elevated above the present seafloor in the south and not exposed through the seafloor in the north. The relative smoothness of the PR surface for the entire 20 km N-S contrasts with the more complex, shorter-wavelength faulting of the continental crustal blocks to the east. The PR does not seem to show offsets or any apparent internal structure. The PSTM dip lines show substantial improvement for the structures in the deep sedimentary basin east of the PR. These seem to extend the S reflector somewhat farther to the west. The migrated data show a substantial network of

  12. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  13. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  14. COMBINING A NEW 3-D SEISMIC S-WAVE PROPAGATION ANALYSIS FOR REMOTE FRACTURE DETECTION WITH A ROBUST SUBSURFACE MICROFRACTURE-BASED VERIFICATION TECHNIQUE

    SciTech Connect

    Bob Hardage; M.M. Backus; M.V. DeAngelo; R.J. Graebner; S.E. Laubach; Paul Murray

    2004-02-01

    Fractures within the producing reservoirs at McElroy Field could not be studied with the industry-provided 3C3D seismic data used as a cost-sharing contribution in this study. The signal-to-noise character of the converted-SV data across the targeted reservoirs in these contributed data was not adequate for interpreting azimuth-dependent data effects. After illustrating the low signal quality of the converted-SV data at McElroy Field, the seismic portion of this report abandons the McElroy study site and defers to 3C3D seismic data acquired across a different fractured carbonate reservoir system to illustrate how 3C3D seismic data can provide useful information about fracture systems. Using these latter data, we illustrate how fast-S and slow-S data effects can be analyzed in the prestack domain to recognize fracture azimuth, and then demonstrate how fast-S and slow-S data volumes can be analyzed in the poststack domain to estimate fracture intensity. In the geologic portion of the report, we analyze published regional stress data near McElroy Field and numerous formation multi-imager (FMI) logs acquired across McElroy to develop possible fracture models for the McElroy system. Regional stress data imply a fracture orientation different from the orientations observed in most of the FMI logs. This report culminates Phase 2 of the study, ''Combining a New 3-D Seismic S-Wave Propagation Analysis for Remote Fracture Detection with a Robust Subsurface Microfracture-Based Verification Technique''. Phase 3 will not be initiated because wells were to be drilled in Phase 3 of the project to verify the validity of fracture-orientation maps and fracture-intensity maps produced in Phase 2. Such maps cannot be made across McElroy Field because of the limitations of the available 3C3D seismic data at the depth level of the reservoir target.

  15. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  16. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  17. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  18. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a further development of noninvasive diagnostic imaging by real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize the 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected liver findings. The results were compared with the volumes estimated from 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated image processing. In more than 75% of the cases examined, we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria we developed for 3D imaging of processed data. Large differences were found between the liver volumes estimated by the three techniques. 3D ultrasound represents a valuable method for judging the morphological appearance of abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to determine whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  19. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    The Euro3D RTN is an EU-funded Research Training Network to foster the exploitation of 3D spectroscopy in Europe. 3D spectroscopy is a general term for spectroscopy of an area of the sky and derives its name from its two spatial + one spectral dimensions. There are an increasing number of instruments which use integral field devices to achieve spectroscopy of an area of the sky, either using lens arrays, optical fibres or image slicers, to pack spectra of multiple pixels on the sky (``spaxels'') onto a 2D detector. On account of the large volume of data and the special methods required to reduce and analyse 3D data, there are only a few centres of expertise and these are mostly involved with instrument developments. There is a perceived lack of expertise in 3D spectroscopy spread through the astronomical community and its use in the armoury of the observational astronomer is viewed as being highly specialised. For precisely this reason the Euro3D RTN was proposed to train young researchers in this area and develop user tools to widen the experience with this particular type of data in Europe. The Euro3D RTN is coordinated by Martin M. Roth (Astrophysikalisches Institut Potsdam) and has been running since July 2002. The first Euro3D science conference was held in Cambridge, UK from 22 to 23 May 2003. The main emphasis of the conference was, in keeping with the RTN, to expose the work of the young post-docs who are funded by the RTN. In addition the team members from the eleven European institutes involved in Euro3D also presented instrumental and observational developments. The conference was organized by Andy Bunker and held at the Institute of Astronomy. There were over thirty participants and 26 talks covered the whole range of application of 3D techniques. The science ranged from Galactic planetary nebulae and globular clusters to kinematics of nearby galaxies out to objects at high redshift. Several talks were devoted to reporting recent observations with newly

  20. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  1. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  2. Q estimation using modified S transform based on pre-stack gathers and its applications on carbonate reservoir

    NASA Astrophysics Data System (ADS)

    Zandong Sun, Sam; Sun, Xuekai; Wang, Yonggang; Xie, Huiwen

    2015-10-01

    Pre-stack seismic data is acknowledged to be more favorable for estimating Q values, since it carries much more valuable information in traveltime and amplitude than post-stack data. However, the spectrum of reflectors can be strongly altered by nearby reflectors or side lobes of the wavelet, which degrades the accuracy of Q estimation based on the pre-stack spectral ratio method. To solve this problem, we propose a method based on the modified S-transform (MST) for estimating Q values from pre-stack gathers, in which Q values can be obtained by regression analysis based on the relationship between the spectral ratio slope and the square of offset. Through tests on a numerical model, we first demonstrate the advantages of this pre-stack spectral ratio method compared to the traditional post-stack method. It is also shown that application of the MST leads to a much more focused intercept, which is the kernel of the pre-stack method. Therefore, the accuracy of Q estimation using the MST is further improved compared with that of the conventional S-transform (ST). Based on this Q estimation method, we apply relevant processing methods (e.g. inverse Q filtering and dynamic Q migration) in practice, in order to improve imaging resolution and gather quality with better amplitude and phase relationships. Application to a carbonate reservoir shows remarkable enhancement of the imaging result, in which features of faults and deep strata are more clearly revealed. Moreover, pre-stack common-reflection-point (CRP) gathers obtained by dynamic Q migration compensate well for the amplitude loss and correct the phase. The final pre-stack elastic inversion result better characterizes the geology of a complex carbonate reservoir dominated by secondary storage space.
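
    As a rough illustration of the regression idea described above (and not of the modified S-transform itself), the following Python sketch estimates Q from the classical spectral-ratio slope and its dependence on offset squared; the frequency band, the small-offset traveltime approximation and all names are assumptions for illustration only.

        import numpy as np

        def spectral_ratio_slope(ref_spec, target_spec, freqs, band=(10.0, 60.0)):
            """Slope of ln|target/ref| versus frequency over a usable band (Hz)."""
            sel = (freqs >= band[0]) & (freqs <= band[1])
            ratio = np.log(np.abs(target_spec[sel]) / np.abs(ref_spec[sel]))
            slope, _ = np.polyfit(freqs[sel], ratio, 1)
            return slope

        def q_from_offset_regression(slopes, offsets, t0):
            """Fit spectral-ratio slope against offset**2. For a single layer with
            t(x) ~ t0 + x**2 / (2 * t0 * v**2), slope(x) = -pi * t(x) / Q, so the
            intercept of the regression equals -pi * t0 / Q and yields Q directly
            (plain spectral-ratio reasoning, assumed here for the sketch)."""
            x2 = np.asarray(offsets, dtype=float) ** 2
            trend, intercept = np.polyfit(x2, np.asarray(slopes, dtype=float), 1)
            return -np.pi * t0 / intercept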

  3. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D
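
    The fusion and rendering steps described above can be illustrated with a small Python sketch: disparity fields borrowed from the retrieved stereopairs are combined with a per-pixel median, and the right view is synthesized by a naive forward warp with simple row-wise hole filling for newly exposed areas. All names and the hole-filling rule are assumptions, not the authors' pipeline.

        import numpy as np

        def fuse_disparities(candidate_disps):
            """Per-pixel median over disparity fields from retrieved stereopairs."""
            return np.median(np.stack(candidate_disps), axis=0)

        def render_right_view(left_img, disparity):
            """Naive forward warp: shift each left pixel by its disparity, then fill
            holes from the last rendered pixel on the same row (crude occlusion handling)."""
            h, w = disparity.shape
            right = np.zeros_like(left_img)
            filled = np.zeros((h, w), dtype=bool)
            for y in range(h):
                for x in range(w):
                    xr = int(round(x - disparity[y, x]))
                    if 0 <= xr < w:
                        right[y, xr] = left_img[y, x]
                        filled[y, xr] = True
                last = None
                for xr in range(w):
                    if filled[y, xr]:
                        last = right[y, xr]
                    elif last is not None:
                        right[y, xr] = last  # newly exposed area: copy nearest neighbor
            return right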

  4. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ on a chip models. PMID:26066320

  5. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
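
    A simplified Python/OpenCV sketch of the matching-and-estimation step is given below. It recovers only roll, relative scale and a residual-vertical-disparity proxy from a robust similarity fit, whereas the paper's algorithm also estimates pitch and yaw and applies its own frame-rejection heuristics; the thresholds and names here are assumptions.

        import cv2
        import numpy as np

        def stereo_misalignment(left_gray, right_gray, max_keypoints=1000):
            """Estimate roll, scale, and residual vertical disparity from matched keypoints."""
            orb = cv2.ORB_create(nfeatures=max_keypoints)
            kp1, des1 = orb.detectAndCompute(left_gray, None)
            kp2, des2 = orb.detectAndCompute(right_gray, None)
            if des1 is None or des2 is None:
                return None  # insufficiently rich keypoint constellation: discard frame
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
            if len(matches) < 20:
                return None
            src = np.float32([kp1[m.queryIdx].pt for m in matches])
            dst = np.float32([kp2[m.trainIdx].pt for m in matches])
            # Robust similarity fit (RANSAC) rejects erroneous matches.
            M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
            if M is None:
                return None
            scale = float(np.hypot(M[0, 0], M[1, 0]))
            roll_deg = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
            v_disp = float(np.median(dst[:, 1] - src[:, 1]))  # vertical disparity proxy
            return {"scale": scale, "roll_deg": roll_deg, "vertical_disparity_px": v_disp}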

  6. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

    Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user-friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly. The Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715

  7. Anisotropy effects on 3D waveform inversion

    NASA Astrophysics Data System (ADS)

    Stekl, I.; Warner, M.; Umpleby, A.

    2010-12-01

    In recent years 3D waveform inversion has become an achievable procedure for seismic data processing. A number of datasets have been inverted and presented (Warner et al. 2008; Ben Hadj et al.; Sirgue et al. 2010) using isotropic 3D waveform inversion. However, the question arises whether the results are affected by the isotropic assumption. Full-wavefield inversion techniques seek to match field data, wiggle-for-wiggle, to synthetic data generated by a high-resolution model of the sub-surface. In this endeavour, correctly matching the travel times of the principal arrivals is a necessary minimal requirement. In many, perhaps most, long-offset and wide-azimuth datasets, it is necessary to introduce some form of P-wave velocity anisotropy to match the travel times successfully. If this anisotropy is not also incorporated into the wavefield inversion, then results from the inversion will necessarily be compromised. We have incorporated anisotropy into our 3D wavefield tomography codes, characterised as spatially varying transverse isotropy with a tilted axis of symmetry - TTI anisotropy. This enhancement approximately doubles both the run time and the memory requirements of the code. We show that neglect of anisotropy can lead to significant artefacts in the recovered velocity models. We present results of inverting an anisotropic 3D dataset under an isotropic earth assumption and compare them with the anisotropic inversion result. As a test case, we use the Marmousi model extended to 3D with no velocity variation in the third direction and with added spatially varying anisotropy. The acquisition geometry is assumed to be OBC, with sources and receivers everywhere at the surface. We attempted inversion using both 2D and full 3D acquisition for this dataset. Results show that if anisotropy is not taken into account, the image may look plausible but most features are mispositioned in depth and space, even for relatively low anisotropy, which leads to an incorrect result. This may lead to
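
    For reference, a common weak-anisotropy parameterization of the P-wave phase velocity (Thomsen, 1986) is written below; in the TTI case used by the authors, the phase angle is measured from the tilted symmetry axis. This formula is standard background, not taken from the abstract.

        % Weak-anisotropy P-wave phase velocity (Thomsen, 1986); for TTI media
        % the phase angle \theta is measured from the tilted symmetry axis.
        \[
          v_P(\theta) \approx v_{P0}\,\bigl(1 + \delta \sin^2\theta \cos^2\theta
                                              + \varepsilon \sin^4\theta\bigr)
        \]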

  8. Fdf in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D. This is an unstructured Eulerian finite volume hydrodynamic solver and has proven very effective for simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and the accuracy of the simulated results are assessed along with appraisal of the overall performance of the methodology. The SFMDF-US3D is now capable of simulating high speed flows in complex configurations.

  9. Iterative Multiparameter Elastic Waveform Inversion Using Prestack Time Imaging and Kirchhoff approximation

    NASA Astrophysics Data System (ADS)

    Khaniani, Hassan

    This thesis proposes a "standard strategy" for iterative inversion of elastic properties from seismic reflection data. The term "standard" refers to the current hands-on commercial techniques that are used for the seismic imaging and inverse problem. The method is established to reduce the computation time associated with elastic Full Waveform Inversion (FWI) methods. It makes use of AVO analysis, prestack time migration and the corresponding forward modeling in an iterative scheme. The main objective is to describe the iterative inversion procedure applied to seismic reflection data using simplified mathematical expressions and their numerical applications. The framework of the inversion is similar to the FWI method but with lower computational cost. The reduction in computational cost depends on the data conditioning (with or without multiple data), the level of complexity of the geological model, and acquisition conditions such as the Signal to Noise Ratio (SNR). Many processing methods consider multiple events as noise and remove them from the data. This is the motivation for reducing the computational cost associated with Finite Difference Time Domain (FDTD) forward modeling and Reverse Time Migration (RTM)-based techniques. Therefore, a one-way solution of the wave equation is implemented for inversion. While less computationally intensive depth imaging methods are available by iterative coupling of ray theory and the Born approximation, it is shown that we can further reduce the cost of inversion by dropping the cost of ray tracing for traveltime estimation, in a way similar to standard Prestack Time Migration (PSTM) and the corresponding forward modeling. This requires the model to have smooth lateral variations in elastic properties, so that the traveltime of the scatterpoints can be approximated by a Double Square Root (DSR) equation, as written out below. To represent a more realistic and stable solution of the inverse problem, while considering the phase of supercritical angles, the
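
    One common form of the DSR traveltime mentioned above, for a scatterpoint at lateral position x with zero-offset two-way time t_0 and RMS velocity v_rms, and source and receiver at surface positions x_s and x_r, is

        \[
          t(x_s, x_r) \;=\; \sqrt{\left(\tfrac{t_0}{2}\right)^2
                + \frac{(x_s - x)^2}{v_{\mathrm{rms}}^2}}
            \;+\; \sqrt{\left(\tfrac{t_0}{2}\right)^2
                + \frac{(x_r - x)^2}{v_{\mathrm{rms}}^2}}
        \]

    which reduces traveltime estimation to evaluating two square roots per source-receiver pair instead of tracing rays, as exploited in standard PSTM.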

  10. 3D measurement using circular gratings

    NASA Astrophysics Data System (ADS)

    Harding, Kevin

    2013-09-01

    3D measurement using methods of structured light is well known in the industry. Most such systems use some variation of straight lines, either as simple lines or with some form of encoding. This geometry assumes the lines will be projected from one side and viewed from another to generate the profile information. But what about applications where a wide triangulation angle may not be practical, particularly at longer standoff distances? This paper explores the use of circular grating patterns projected from a center point to achieve 3D information. Originally suggested by John Caulfield around 1990, the method had some interesting potential, particularly if combined with means of measurement other than traditional triangulation, including depth-from-focus methods. The possible advantages of a central reference point in the projected pattern may offer some different capabilities not as easily attained with a linear grating pattern. This paper will explore the pros and cons of the method and present some examples of possible applications.

  11. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  12. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target based on a pulse flight-time measurement method. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. In order to remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.

  13. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D-folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  14. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  15. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex

  16. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  17. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as

  18. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

    at ~10 nm resolution over hundreds of microns in 3 spatial dimensions. Super-resolution microscopy methods based upon single-molecule localization were originally limited to 2D slices. Recent advances in this field have extended these methods to three dimensions. However, the 3D rendering was limited to viewing sparsely labeled cellular structures over a z-depth of less than 1 micron. Our first goal is to extend super-resolution microscopy to z-depths of hundreds of microns. This substantial improvement is needed to image polymer nanostructure over functionally relevant length scales. (2) Benchmark this instrument by studying the 3D nanostructure of diblock co-polymer morphologies. We will test and benchmark our instrument by imaging fluorescently labeled diblock copolymers, molecules that self-assemble into a variety of 3D nano-structures. We reiterate that these polymers are useful for a variety of applications ranging from lithography to light harvesting.

  19. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  20. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers

  1. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  2. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as

  3. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, ranging from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing which allows a solid object to be obtained from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes which would be quite difficult to produce with dedicated conventional facilities. Because a 3D print is built up by superposing one layer on another, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it was necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  4. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  5. Prestack migration velocity analysis based on simplified two-parameter moveout equation

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Feng; Li, Xiang-Yang; Qian, Zhong-Ping; Song, Jian-Jun; Zhao, Gui-Ling

    2016-03-01

    Stacking velocity V_C2, vertical velocity ratio γ_0, effective velocity ratio γ_eff, and anisotropic parameter χ_eff are correlated in the PS-converted-wave (PS-wave) anisotropic prestack Kirchhoff time migration (PKTM) velocity model and are thus difficult to independently determine. We extended the simplified two-parameter (stacking velocity V_C2 and anisotropic parameter k_eff) moveout equation from stacking velocity analysis to PKTM velocity model updating and formed a new four-parameter (stacking velocity V_C2, vertical velocity ratio γ_0, effective velocity ratio γ_eff, and anisotropic parameter k_eff) PS-wave anisotropic PKTM velocity model updating and process flow based on the simplified two-parameter moveout equation. In the proposed method, first, the PS-wave two-parameter stacking velocity is analyzed to obtain the anisotropic PKTM initial velocity and anisotropic parameters; then, the velocity and anisotropic parameters are corrected by analyzing the residual moveout on common imaging point gathers after prestack time migration. The vertical velocity ratio γ_0 of the prestack time migration velocity model is obtained with an appropriate method utilizing the P- and PS-wave stacked sections after level calibration. The initial effective velocity ratio γ_eff is calculated using the Thomsen (1999) equation in combination with the P-wave velocity analysis; ultimately, the final velocity model of the effective velocity ratio γ_eff is obtained by percentage scanning migration. This method simplifies the PS-wave parameter estimation in high-quality imaging, reduces the uncertainty of multiparameter estimations, and obtains good imaging results in practice.

  6. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  7. Geomatics for precise 3D breast imaging.

    PubMed

    Alto, Hilary

    2005-02-01

    Canadian women have a one in nine chance of developing breast cancer during their lifetime. Mammography is the most common imaging technology used for breast cancer detection in its earliest stages through screening programs. Clusters of microcalcifications are primary indicators of breast cancer; the shape, size and number may be used to determine whether they are malignant or benign. However, overlapping images of calcifications on a mammogram hinder the classification of the shape and size of each calcification and a misdiagnosis may occur resulting in either an unnecessary biopsy being performed or a necessary biopsy not being performed. The introduction of 3D imaging techniques such as standard photogrammetry may increase the confidence of the radiologist when making his/her diagnosis. In this paper, traditional analytical photogrammetric techniques for the 3D mathematical reconstruction of microcalcifications are presented. The techniques are applied to a specially designed and constructed x-ray transparent Plexiglas phantom (control object). The phantom was embedded with 1.0 mm x-ray opaque lead pellets configured to represent overlapping microcalcifications. Control points on the phantom were determined by standard survey methods and hand measurements. X-ray films were obtained using a LORAD M-III mammography machine. The photogrammetric techniques of relative and absolute orientation were applied to the 2D mammographic films to analytically generate a 3D depth map with an overall accuracy of 0.6 mm. A Bundle Adjustment and the Direct Linear Transform were used to confirm the results. PMID:15649085
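
    The Direct Linear Transform used above for confirmation is a standard photogrammetric tool; a minimal Python sketch of its two steps (calibration from known control points, then two-view triangulation) is shown below. This is a generic textbook formulation under assumed conventions, not the authors' implementation, and all names are illustrative.

        import numpy as np

        def dlt_calibrate(xyz, uv):
            """Solve the 11 DLT parameters from >= 6 control points (homogeneous LS via SVD)."""
            A = []
            for (X, Y, Z), (u, v) in zip(xyz, uv):
                A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
                A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
            _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
            return vt[-1].reshape(3, 4)  # projection matrix, defined up to scale

        def dlt_triangulate(P1, uv1, P2, uv2):
            """Intersect two image rays to recover the 3D point (two-view DLT)."""
            u1, v1 = uv1
            u2, v2 = uv2
            A = np.vstack([u1 * P1[2] - P1[0],
                           v1 * P1[2] - P1[1],
                           u2 * P2[2] - P2[0],
                           v2 * P2[2] - P2[1]])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]  # dehomogenize to (X, Y, Z)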

  8. Development and Calibration of New 3-D Vector VSP Imaging Technology: Vinton Salt Dome, LA

    SciTech Connect

    Kurt J. Marfurt; Hua-Wei Zhou; E. Charlotte Sullivan

    2004-09-01

    Vinton salt dome is located in Southwestern Louisiana, in Calcasieu Parish. Tectonically, the piercement dome is within the salt dome minibasin province. The field has been in production since 1901, with most of the production coming from Miocene and Oligocene sands. The goal of our project was to develop and calibrate new processing and interpretation technology to fully exploit the information available from a simultaneous 3-D surface seismic survey and 3-C, 3-D vertical seismic profile (VSP) survey over the dome. More specifically, the goal was to better image salt dome flanks and small, reservoir-compartmentalizing faults. This new technology has application to mature salt-related fields across the Gulf Coast. The primary focus of our effort was to develop, apply, and assess the limitations of new 3-C, 3-D wavefield separation and imaging technology that could be used to image aliased, limited-aperture, vector VSP data. Through 2-D and 3-D full elastic modeling, we verified that salt flank reflections exist in the horizontally-traveling portion of the wavefield rather than the up- and down-going portions of the wavefield, thereby explaining why many commercial VSP processing flows failed. Since the P-wave reflections from the salt flank are measured primarily on the horizontal components while P-wave reflections from deeper sedimentary horizons are measured primarily on the vertical component, a true vector VSP analysis was needed. We developed an antialiased discrete Radon transform filter to accurately model P- and S-wave data components measured by the vector VSP. On-the-fly polarization filtering embedded in our Kirchhoff imaging algorithm was effective in separating PP from PS wave images. By the novel application of semblance-weighted filters, we were able to suppress many of the migration artifacts associated with low fold, sparse VSP acquisition geometries. To provide a better velocity/depth model, we applied 3-D prestack depth migration to the surface data
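
    As context for the Radon-transform modeling step mentioned above, a plain (not anti-aliased) linear tau-p slant stack over a gather can be sketched in a few lines of Python; the anti-aliased discrete Radon filter used in the report is considerably more involved. Array shapes, units and names are assumptions for illustration only.

        import numpy as np

        def slant_stack(gather, offsets, dt, slownesses):
            """Plain linear tau-p (slant-stack) transform of a gather.
            gather: (ntraces, nsamples); offsets in metres; dt in seconds;
            slownesses p in s/m. Returns a (nslowness, nsamples) tau-p panel."""
            ntraces, nsamples = gather.shape
            t = np.arange(nsamples) * dt
            taup = np.zeros((len(slownesses), nsamples))
            for ip, p in enumerate(slownesses):
                for itr, x in enumerate(offsets):
                    # sample trace itr at times tau + p*x and stack over offset
                    taup[ip] += np.interp(t + p * x, t, gather[itr], left=0.0, right=0.0)
            return taup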

  9. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  10. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: ''3D Code Development'' and ''Dynamic Material Properties''. The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The next year's activities will indicate the merging of the two efforts. The current activity is structured in two tasks. Task A, ''Simulations and Measurements'', combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, ''ALE3D Development'', is a continuation of the non-materials related activities from the previous project.

  11. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available elastic wave propagation based techniques are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, the saw mills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and the repair cost. Also, if the internal defects such as knots and decayed areas could be identified in logs, the sawing blade can be oriented to exclude the defective portion and optimize the volume of high valued lumber that can be obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within the logs. This paper discusses GPR imaging and mapping of the internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted in a log and the reflection patterns from these metals were interpreted from the radargrams acquired using 900 MHz antenna. Also, GPR was able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate 3D cylindrical volume. The actual location of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from 3D cylindrical volume data were useful in understanding the extent of defects inside the log.

  12. New portable FELIX 3D display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz

    1998-04-01

    An improved generation of our 'FELIX 3D Display' is presented. This system is compact, light, modular and easy to transport. The created volumetric images consist of many voxels, which are generated in a half-sphere display volume. In that way a spatial object can be displayed occupying a physical space with height, width and depth. The new FELIX generation uses a screen rotating with 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism, making it possible to use appropriate screens for the specific purpose of the display. An acousto-optic deflection unit with an integrated small diode pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are investigated. The first one is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook. The developed software is provided with a graphical user interface enabling several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high speed hardware interface suitable for state-of-the-art fast and high resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include imaging in the fields of air traffic control, medical imaging, computer-aided design, and science, as well as entertainment.

  13. SNL3dFace

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  14. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  15. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

    This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
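
    A minimal sketch of the PCA feature-projection stage described above, with the ICP alignment and FLDA stages omitted: each aligned 3D face is flattened into a vector of XYZ coordinates, a PCA basis is learned from a gallery, and probes are encoded by projecting onto that basis. All data here are synthetic; this is not the distributed MATLAB/C++ code.

```python
import numpy as np

def fit_pca(faces, n_components=20):
    """Learn a PCA basis from flattened, aligned 3D faces (rows = faces)."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:n_components]            # principal axes as rows

def project(face, mean, components):
    """Encode one face as its PCA feature vector."""
    return components @ (face - mean)

# Synthetic gallery: 50 "faces", each 500 vertices x 3 coordinates, flattened
rng = np.random.default_rng(0)
gallery = rng.standard_normal((50, 500 * 3))
mean, comps = fit_pca(gallery, n_components=20)
probe = gallery[7] + 0.01 * rng.standard_normal(500 * 3)    # noisy re-capture of face 7
features = project(probe, mean, comps)
gallery_features = (gallery - mean) @ comps.T
best_match = np.argmin(np.linalg.norm(gallery_features - features, axis=1))
print(best_match)    # 7
```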

  16. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  17. Case study: The Avengers 3D: cinematic techniques and digitally created 3D

    NASA Astrophysics Data System (ADS)

    Clark, Graham D.

    2013-03-01

    Marvel's THE AVENGERS was the third film Stereo D collaborated on with Marvel; it was a summation of our artistic development of what Digitally Created 3D and Stereo D's artists and toolsets afford Marvel's filmmakers: the ability to shape stereographic space to support the film and story, in a way that balances human perception and live photography. We took our artistic lead from the cinematic intentions of Marvel, the Director Joss Whedon, and Director of Photography Seamus McGarvey. In the digital creation of a 3D film from a 2D image capture, recommendations on the filmmakers' cinematic techniques are offered by Stereo D at each step, from pre-production onwards, through set, into post. As the footage arrives at our facility we respond in depth to the cinematic qualities of the imagery in the context of the edit and story, with the guidance of the Directors and Studio, creating stereoscopic imagery. Our involvement in The Avengers began early in production; after reading the script we had the opportunity and honor to meet and work with the Director Joss Whedon, and DP Seamus McGarvey on set, and into post. We presented what is obvious to such great filmmakers in the ways of cinematic techniques as they related to the standard depth cues and story points we would use to evaluate depth for their film. Our hope was that any cinematic habits that supported better 3D would be emphasized. In searching for a 3D statement for the studio and filmmakers, we arrived at a stereographic style that allowed for comfort and maximum visual engagement for the viewer.

  18. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  19. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  20. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing conditions. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images that comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from an observer under IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses compared to the binocular stereoscopic image. The accommodation responses of the IP image were weaker than those of a real object; however, they showed a similar tendency with those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  1. Prestack seismic data regularization using a time-variant anisotropic Radon transform

    NASA Astrophysics Data System (ADS)

    Gong, Xiangbo; Yu, Shuang; Wang, Shengchao

    2016-08-01

    The Radon transform (RT) has been widely used in seismic data processing. In this paper, we develop a sparse time-variant anisotropic Radon transform (ART) to regularize and interpolate prestack seismic data. By introducing the anelliptical parameter η, the ART has a more accurate integral path than other widely used RTs, which produces a better energy-focused Radon panel in the case of a vertical transverse isotropy (VTI) medium or a seismic gather with large moveout. To promote the sparsity of the Radon panel, the RT is realized as an l1–l2 norm inversion problem, and the fast iterative shrinkage thresholding algorithm is employed to solve this sparsity-constrained inversion problem. Compared with the time-invariant parabolic RT in the mixed frequency-time domain and the time-variant hyperbolic RT, the reconstructed result of the ART has the best performance and the least reconstruction error in a general synthetic VTI medium. A marine field example further demonstrates that the ART is effective and robust for prestack seismic data regularization.
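
    The sparsity-constrained inversion mentioned above can be illustrated with a generic FISTA solver for min_m 0.5*||d - Lm||^2 + lambda*||m||_1, where L stands for the Radon operator. This is a minimal sketch with a dense random matrix standing in for the ART operator; the actual time-variant anisotropic transform and its parameters are not reproduced.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(L, d, lam, n_iter=200):
    """FISTA for min_m 0.5*||d - L m||^2 + lam*||m||_1."""
    m = np.zeros(L.shape[1])
    z, t = m.copy(), 1.0
    step = 1.0 / np.linalg.norm(L, 2) ** 2     # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = L.T @ (L @ z - d)
        m_new = soft_threshold(z - step * grad, step * lam)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = m_new + (t - 1.0) / t_new * (m_new - m)   # momentum step
        m, t = m_new, t_new
    return m

# Toy example: recover a sparse model from noisy linear observations
rng = np.random.default_rng(0)
L = rng.standard_normal((80, 200))
m_true = np.zeros(200); m_true[[10, 50, 120]] = [1.0, -2.0, 1.5]
d = L @ m_true + 0.01 * rng.standard_normal(80)
m_est = fista(L, d, lam=0.1)
print(np.flatnonzero(np.abs(m_est) > 0.3))   # should pick out indices near 10, 50, 120
```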

  2. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  3. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  4. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  5. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

    A new method for 360-degree 3D shape measurement of turning objects, in which light sectioning and phase shifting techniques are combined, is presented in this paper. A sinusoidal light field is applied to the projected light stripe, and a phase shifting technique is used to calculate the phases of the light slit. The wrapped phase distribution of the slit is then formed, and the unwrapping process is carried out by means of the height information obtained from the light sectioning method, so that phase measuring results with better precision can be obtained. Finally, the target 3D shape data can be produced according to the geometric relationships between the phases and the object heights. The principles of this method are discussed in detail and experimental results are shown in this paper.
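
    The phase-shifting step admits a compact closed form: with four sinusoidal patterns shifted by 90 degrees, the wrapped phase is atan2(I4 - I2, I1 - I3). The sketch below illustrates only that step on a synthetic scan line; the light-sectioning-guided unwrapping of the paper is replaced by numpy's generic 1-D unwrap, so this is not the authors' procedure.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four frames shifted by 0, 90, 180 and 270 degrees."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic fringe intensities along one scan line of the light stripe
x = np.linspace(0.0, 4.0 * np.pi, 500)       # true (unwrapped) phase along the slit
frames = [np.cos(x + k * np.pi / 2.0) for k in range(4)]
wrapped = four_step_phase(*frames)
unwrapped = np.unwrap(wrapped)
print(np.allclose(unwrapped - unwrapped[0], x - x[0], atol=1e-6))   # True
```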

  6. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after mature mass-processing technologies were developed. Graphene is the most recent superior material which could potentially initiate another new material age. However, while graphene is being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading of up to 5.6 wt%, can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress minute during the printing process. PMID:26153673

  7. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting-edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling, as closely as possible, the appearance and mobility of a real human hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand (without the actuators) was $167, significantly lower than that of other robotic hands, which have more complex assembly processes.

  8. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A. Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of the layers and the order of their alternation. A possible way to experimentally synthesize new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  9. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume, and physically places each 3D voxel in displayed 3D images at the true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to human visual systems to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. The volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  10. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become the essential tool for understanding, diagnosis and management of patients. PMID:11494630

  11. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become the essential tool for understanding, diagnosis and management of patients.

  12. 3D gesture recognition from serial range image

    NASA Astrophysics Data System (ADS)

    Matsui, Yasuyuki; Miyasaka, Takeo; Hirose, Makoto; Araki, Kazuo

    2001-10-01

    In this research, the recognition of gestures in 3D space is examined by using serial range images obtained by a real-time 3D measurement system developed in our laboratory. Using this system, it is possible to obtain time sequences of range, intensity and color data for a moving object in real time without assigning markers to the targets. First, gestures are tracked in 2D space by calculating 2D flow vectors at each point using an ordinary optical flow estimation method, based on time sequences of the intensity data. Then, the location of each point after 2D movement is detected on the x-y plane using the obtained 2D flow vectors. Depth information of each point after movement is then obtained from the range data, and 3D flow vectors are assigned to each point. Time sequences of the obtained 3D flow vectors allow us to track the 3D movement of the target. Thus, based on time sequences of 3D flow vectors of the targets, it is possible to classify the movement of the targets using a continuous DP matching technique. This tracking of 3D movement using time sequences of 3D flow vectors may be applicable to a robust gesture recognition system.
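
    A hedged sketch of the flow-lifting step described above: given a per-pixel 2D optical-flow field and the range images of two successive frames, each tracked point's 3D displacement is obtained by sampling the depth at its displaced position. Variable names and the toy data are illustrative; the original optical-flow and DP-matching stages are not reproduced.

```python
import numpy as np

def lift_flow_to_3d(flow_uv, range_t, range_t1):
    """Turn 2D flow vectors into 3D flow vectors using two range images.

    flow_uv  : (H, W, 2) per-pixel 2D flow (du, dv) in pixels
    range_t  : (H, W) depth at time t
    range_t1 : (H, W) depth at time t+1
    Returns (H, W, 3) flow (du, dv, dz).
    """
    H, W = range_t.shape
    v, u = np.mgrid[0:H, 0:W]
    u1 = np.clip(np.rint(u + flow_uv[..., 0]).astype(int), 0, W - 1)
    v1 = np.clip(np.rint(v + flow_uv[..., 1]).astype(int), 0, H - 1)
    dz = range_t1[v1, u1] - range_t          # depth change of each tracked point
    return np.dstack([flow_uv[..., 0], flow_uv[..., 1], dz])

# Toy data: everything shifts 2 pixels right and moves 5 cm closer to the sensor
range_t = np.full((8, 8), 100.0)
range_t1 = np.full((8, 8), 95.0)
flow = np.zeros((8, 8, 2)); flow[..., 0] = 2.0
flow3d = lift_flow_to_3d(flow, range_t, range_t1)
print(flow3d[4, 4])    # [ 2.  0. -5.]
```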

  13. Personal perceptual and cognitive property for 3D recognition

    NASA Astrophysics Data System (ADS)

    Matozaki, Takeshi; Tanisita, Akihiko

    1996-04-01

    3D closed circuit TV, which produces stereoscopic vision by presenting different images to each eye alternately, has been proposed. However, there are several problems, both physiological and psychological, for 3D image observation in many fields. From this perspective, we are studying personal visual characteristics for 3D recognition in the transition from 2D to 3D. We have separated the mechanism of 3D recognition into several categories and formed some hypotheses about the personal features. These hypotheses are related to an observer's personal features, as follows: (1) consideration of the angle between the left and the right eye's line of vision and the adjustment of focus, (2) consideration of the angle of vision and the time required for fusion, (3) consideration of depth sense based on life experience, (4) consideration of 3D experience, and (5) consideration of 3D sense based on the observer's age. To test these hypotheses, we have analyzed the personal features of the time interval required for 3D recognition through examinations of examinees. Examinees indicate their response for 3D recognition by pushing a button. Recently, we introduced a method for picking up the reaction of 3D recognition from examinees through their biological information, for example, analysis of pulse waves of the finger. As a result of the analysis of pulse waves, we also propose a hypothesis: (1) we can observe a chaotic response when the examinee is recognizing a 2D image, and (2) we can observe a periodic response when the examinee is recognizing a 3D image. We are making nonlinear forecasts by obtaining the correlation between the forecast and the biological phenomena. Deterministic nonlinear predictions are applied to the data, as a promising method of chaotic time series analysis, in order to analyze long-term unpredictability, one of the fundamental characteristics of deterministic chaos.

  14. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU, and what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
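
    For orientation, the sketch below is a brute-force CPU/numpy version of the 3D bilateral filter that the GPU code accelerates. The spatial and range sigmas are the kind of parameters the described sweep would tune; the values and toy volume here are illustrative only, and no GPU tuning is shown.

```python
import numpy as np

def bilateral_filter_3d(vol, radius=2, sigma_s=1.5, sigma_r=20.0):
    """Brute-force 3D bilateral filter (reference implementation, not GPU-tuned)."""
    pad = np.pad(vol, radius, mode="edge")
    acc = np.zeros_like(vol, dtype=float)
    wsum = np.zeros_like(vol, dtype=float)
    D = vol.shape
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = pad[radius + dz: radius + dz + D[0],
                              radius + dy: radius + dy + D[1],
                              radius + dx: radius + dx + D[2]]
                w_spatial = np.exp(-(dz*dz + dy*dy + dx*dx) / (2.0 * sigma_s**2))
                w_range = np.exp(-((shifted - vol) ** 2) / (2.0 * sigma_r**2))
                w = w_spatial * w_range            # combined kernel weight per voxel
                acc += w * shifted
                wsum += w
    return acc / wsum

# Noisy toy volume with a sharp intensity step that the filter should preserve
rng = np.random.default_rng(1)
vol = np.zeros((16, 16, 16)); vol[8:, :, :] = 100.0
noisy = vol + 5.0 * rng.standard_normal(vol.shape)
denoised = bilateral_filter_3d(noisy)
print((noisy - vol).std(), (denoised - vol).std())   # residual noise should drop
```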

  15. Video coding and transmission standards for 3D television — a survey

    NASA Astrophysics Data System (ADS)

    Buchowicz, A.

    2013-03-01

    The emerging 3D television systems require effective techniques for transmission and storage of data representing a 3-D scene. The 3-D scene representations based on multiple video sequences or multiple views plus depth maps are especially important since they can be processed with existing video technologies. The review of the video coding and transmission techniques is presented in this paper.

  16. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  17. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex task and rather time-consuming, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips WoWvx 3D display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or it computes the depth information out of a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for manipulation and annotation of medical landmarks directly in a three-dimensional volume.
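
    A minimal sketch of the pose-estimation step, under stated assumptions: the four non-coplanar LED positions are known in the device frame, their image positions are tracked, and a standard PnP solver returns the device pose. The LED layout, camera intrinsics and ground-truth pose below are hypothetical, and OpenCV's solvePnP merely stands in for whichever pose estimation algorithm the authors used.

```python
import numpy as np
import cv2

# Hypothetical non-coplanar LED layout in the controller frame (metres)
obj_pts = np.array([[0.00, 0.00, 0.00],
                    [0.10, 0.00, 0.00],
                    [0.00, 0.08, 0.00],
                    [0.05, 0.04, 0.06]], dtype=np.float64)

# Hypothetical pinhole intrinsics of the tracking camera (pixels)
K = np.array([[1400.0, 0.0, 640.0],
              [0.0, 1400.0, 360.0],
              [0.0, 0.0, 1.0]])

# Simulate the tracked image positions of the LEDs for a known ground-truth pose
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.02], [-0.01], [0.60]])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, None)

# Recover the device pose from the four LED observations (EPnP needs >= 4 points)
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None, flags=cv2.SOLVEPNP_EPNP)
print(ok, rvec.ravel(), tvec.ravel())   # should closely match the ground-truth pose
```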

  18. Application of pre-stack reverse time migration based on FWI velocity estimation to ground penetrating radar data

    NASA Astrophysics Data System (ADS)

    Liu, Sixin; Lei, Linlin; Fu, Lei; Wu, Junjun

    2014-08-01

    Reverse-time migration (RTM) is used for subsurface imaging to handle complex velocity models, including steeply dipping interfaces and dramatic lateral variations, and promises better imaging results than traditional migration methods such as the Kirchhoff algorithm. RTM has been increasingly used in seismic surveys for hydrocarbon resource exploration. Based on the kinematic and dynamic similarity between electromagnetic and elastic waves, we develop a pre-stack RTM method and apply it to ground penetrating radar (GPR) data. The finite-difference time-domain (FDTD) numerical method is used to simulate the electromagnetic wave propagation, including forward and backward extrapolations, and the cross-correlation imaging condition is used to obtain the final image. In order to provide a more accurate initial velocity model for RTM, we apply full waveform inversion (FWI) in the time domain to estimate the subsurface velocity structure from reflection radar data. To test the effectiveness of the algorithm, we constructed a complex geological model and synthesized common-offset radar data and common-shot profile (CSP) radar reflection data. All data are migrated with the traditional Kirchhoff method and the pre-stack RTM method separately; the migration results from pre-stack RTM show better agreement with the true model. Furthermore, we performed a physical experiment in a sandbox in which a polyvinyl chloride (PVC) box was buried in the sand; the obtained common-offset and common-shot radar data were migrated using the Kirchhoff method and the pre-stack RTM algorithm separately, and the pre-stack RTM result again shows better imaging.
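
    A minimal sketch of the cross-correlation imaging condition used in the RTM described above, assuming the forward-extrapolated source wavefield and backward-extrapolated receiver wavefield have already been computed (the FDTD propagation itself is omitted): the image is the zero-lag correlation of the two wavefields at every grid point. The toy wavefields are synthetic.

```python
import numpy as np

def crosscorrelation_image(src_wavefield, rcv_wavefield):
    """Zero-lag cross-correlation imaging condition.

    src_wavefield, rcv_wavefield : (n_t, n_z, n_x) snapshots of the
    forward-extrapolated source field and the backward-extrapolated
    receiver field, sampled on the same time axis.
    """
    image = np.zeros(src_wavefield.shape[1:])
    for it in range(src_wavefield.shape[0]):
        image += src_wavefield[it] * rcv_wavefield[it]   # correlate at zero lag
    return image

# Toy wavefields: both fields peak at the same place and time -> bright image point
nt, nz, nx = 50, 64, 64
S = np.zeros((nt, nz, nx)); R = np.zeros((nt, nz, nx))
S[25, 32, 40] = 1.0
R[25, 32, 40] = 1.0
img = crosscorrelation_image(S, R)
print(np.unravel_index(img.argmax(), img.shape))   # (32, 40)
```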

  19. Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and ROMS

    NASA Astrophysics Data System (ADS)

    Haas, Kevin A.; Warner, John C.

    Predictions of nearshore and surf zone processes are important for determining coastal circulation, impacts of storms, navigation, and recreational safety. Numerical modeling of these systems facilitates advancements in our understanding of coastal changes and can provide predictive capabilities for resource managers. There exist many nearshore coastal circulation models; however, they are mostly limited to, or typically only applied as, depth-integrated models. SHORECIRC is an established surf zone circulation model that is quasi-3D, allowing for the effect of vertical variability in the current structure while maintaining the computational advantage of a 2DH model. Here we compare SHORECIRC to ROMS, a fully 3D ocean circulation model which now includes a three-dimensional formulation for wave-driven flows. We compare the models with three different test applications for: (i) spectral waves approaching a plane beach with an oblique angle of incidence; (ii) monochromatic waves driving longshore currents in a laboratory basin; and (iii) monochromatic waves on a barred beach with rip channels in a laboratory basin. Results show that the models are very similar for the depth-integrated flows and qualitatively consistent for the vertically varying components. The differences are primarily the result of the vertically varying radiation stress utilized by ROMS and of SHORECIRC's use of long-wave theory for the radiation stress formulation in the vertically varying momentum balance. The quasi-3D model is faster; however, the fully 3D model can be applied over a broader range of processes and temporal and spatial scales.

  20. Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and ROMS

    USGS Publications Warehouse

    Haas, K.A.; Warner, J.C.

    2009-01-01

    Predictions of nearshore and surf zone processes are important for determining coastal circulation, impacts of storms, navigation, and recreational safety. Numerical modeling of these systems facilitates advancements in our understanding of coastal changes and can provide predictive capabilities for resource managers. There exist many nearshore coastal circulation models; however, they are mostly limited to, or typically only applied as, depth-integrated models. SHORECIRC is an established surf zone circulation model that is quasi-3D, allowing for the effect of vertical variability in the current structure while maintaining the computational advantage of a 2DH model. Here we compare SHORECIRC to ROMS, a fully 3D ocean circulation model which now includes a three-dimensional formulation for wave-driven flows. We compare the models with three different test applications for: (i) spectral waves approaching a plane beach with an oblique angle of incidence; (ii) monochromatic waves driving longshore currents in a laboratory basin; and (iii) monochromatic waves on a barred beach with rip channels in a laboratory basin. Results show that the models are very similar for the depth-integrated flows and qualitatively consistent for the vertically varying components. The differences are primarily the result of the vertically varying radiation stress utilized by ROMS and of SHORECIRC's use of long-wave theory for the radiation stress formulation in the vertically varying momentum balance. The quasi-3D model is faster; however, the fully 3D model can be applied over a broader range of processes and temporal and spatial scales. © 2008 Elsevier Ltd.

  1. Wide-viewing-angle floating 3D display system with no 3D glasses

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Previously, the author has described a new 3D imaging technology entitled 'real depth' with several different configurations and methods of implementation. Included were several methods to 'float' images in free space. Viewers can pass their hands through the image or appear to hold it in their hands. Most implementations provide an angle of view of approximately 45 degrees. The technology produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. Unlike stereoscopic 3D imaging, no glasses, headgear or other viewing aids are used. In addition to providing traditional depth cues, such as perspective and background image occlusion, the technology also provides both horizontal and vertical binocular parallax, producing visual accommodation and convergence that coincide. Consequently, viewing these images does not produce headaches, fatigue, or eyestrain, regardless of how long they are viewed. A method was also proposed to provide a floating image display system with a wide angle of view. Implementation of this design proved problematic, producing various image distortions. In this paper the author discloses new methods to produce aerial images with a wide angle of view and improved image quality.

  2. 3D sensitivity of 6-electrode Focused Impedance Method (FIM)

    NASA Astrophysics Data System (ADS)

    Masum Iquebal, A. H.; Siddique-e Rabbani, K.

    2010-04-01

    The present work was undertaken to understand the depth sensitivity of the 6-electrode FIM developed by our laboratory earlier, so that it may be applied judiciously to the measurement of organs in 3D with electrodes on the skin surface. For a fixed electrode geometry, sensitivity is expected to depend on the depth, size and conductivity of the target object. With current electrodes 18 cm apart and potential electrodes 5 cm apart, the depth sensitivity of spherical conductors, insulators and pieces of potato of different diameters was measured. The sensitivity dropped sharply with depth, gradually leveling off to the background level, and objects could be sensed down to a depth of about twice their diameter. The sensitivity at a certain depth increases almost linearly with volume for objects with the same conductivity. Thus, these results increase confidence in the use of FIM for studying organs at depth in the body.

  3. Clinical Applications of 3-D Conformal Radiotherapy

    NASA Astrophysics Data System (ADS)

    Miralbell, Raymond

    Although a significant improvement in cancer cure (i.e. a 20% increment) has been obtained in the last 2-3 decades, 30-40% of patients still fail locally after curative radiotherapy. In order to improve local tumor control rates with radiotherapy, high doses to the tumor volume are frequently necessary. Three-dimensional conformal radiation therapy (3-D CRT) is used to denote a spectrum of radiation planning and delivery techniques that rely on three-dimensional imaging to define the target (tumor) and to distinguish it from normal tissues. Modern, high-precision radiotherapy (RT) techniques are needed in order to implement the goal of optimal tumor destruction while delivering minimal dose to the non-target normal tissues. Better target definition is nowadays possible with contemporary imaging (computerized tomography, magnetic resonance imaging, and positron emission tomography) and image registration technology. Highly precise dose distributions can be obtained with optimal 3-D CRT treatment delivery techniques such as stereotactic RT, intensity modulated RT (IMRT), or proton therapy (the latter allowing for in-depth conformation). Daily patient set-up repositioning and internal organ immobilization systems are necessary before undertaking any of the above-mentioned high-precision treatment approaches. Prostate cancer, brain tumors, and base of skull malignancies are among the sites most likely to benefit from dose escalation approaches. Nevertheless, the significant dose reduction to normal tissues in the vicinity of the irradiated tumor, also achievable with optimal 3-D CRT, may be a major issue in the treatment of pediatric tumors in order to preserve growth and normal development and to reduce the risk of radiation-induced diseases such as cancer or endocrinologic disorders.

  4. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. This set currently includes tools for selecting a point of interest, and a ruler tool for displaying the distance between and positions of two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  5. Landmine detection by 3D GPR system

    NASA Astrophysics Data System (ADS)

    Sato, Motoyuki; Yokota, Yuya; Takahashi, Kazunori; Grasmueck, Mark

    2012-06-01

    In order to demonstrate the potential of Ground Penetrating Radar (GPR) for the detection of small buried objects such as landmines and UXO, we conducted demonstration tests using the 3DGPR system, which is a GPR system combined with a high-accuracy positioning system based on a commercial laser positioning system (iGPS). iGPS can provide absolute x, y, z coordinates with better than centimetre precision to multiple mine sensors at the same time. The developed "3DGPR" system is efficient and capable of high-resolution 3D shallow subsurface scanning of larger areas (25 m2 to thousands of square meters) with irregular topography. A field test was conducted using a 500 MHz GPR system equipped with the 3DGPR positioning system. PMN-2 and Type-72 mine models were buried at depths of 5-20 cm in sand. We demonstrated that the 3DGPR can visualize each of these buried landmines very clearly.

  6. Low Complexity Mode Decision for 3D-HEVC

    PubMed Central

    Li, Nana; Gan, Yong

    2014-01-01

    High efficiency video coding (HEVC) based 3D video coding (3D-HEVC), developed by the Joint Collaborative Team on 3D Video Coding (JCT-3V) for multiview video and depth maps, is an extension of the HEVC standard. In the test model of 3D-HEVC, variable coding unit (CU) size decision and disparity estimation (DE) are introduced to achieve the highest coding efficiency at the cost of very high computational complexity. In this paper, a fast mode decision algorithm based on variable-size CU and DE is proposed to reduce 3D-HEVC computational complexity. The basic idea of the method is to use the correlation between the depth map and motion activity to predict where variable-size CU and DE are needed, and to enable them only in those regions. Experimental results show that the proposed algorithm saves about 43% of the average computational complexity of 3D-HEVC while maintaining almost the same rate-distortion (RD) performance. PMID:25254237
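
    As an illustration of the idea only, and not the paper's actual rule set, the sketch below decides per coding unit whether to allow further CU splitting and disparity estimation, using the flatness of the co-located depth block and a simple motion-activity measure; the function name and thresholds are placeholders.

```python
import numpy as np

def allow_split_and_de(depth_block, mv_magnitudes,
                       depth_var_thresh=25.0, motion_thresh=2.0):
    """Heuristic early termination for 3D-HEVC mode decision (illustrative only).

    depth_block   : co-located block of the depth map for this CU
    mv_magnitudes : motion-vector magnitudes of neighbouring/previous CUs
    Returns (try_smaller_cu, try_disparity_estimation).
    """
    depth_is_flat = np.var(depth_block) < depth_var_thresh
    low_motion = np.mean(mv_magnitudes) < motion_thresh
    if depth_is_flat and low_motion:
        # Homogeneous, static region: keep the large CU and skip DE
        return False, False
    # Textured or moving region: allow the full search
    return True, True

print(allow_split_and_de(np.full((16, 16), 128.0), [0.5, 0.3]))                        # (False, False)
print(allow_split_and_de(np.random.default_rng(0).integers(0, 255, (16, 16)), [4.0]))  # (True, True)
```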

  7. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  8. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  9. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  10. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  11. 3D Stratigraphic Modeling of Central Aachen

    NASA Astrophysics Data System (ADS)

    Dong, M.; Neukum, C.; Azzam, R.; Hu, H.

    2010-05-01

    Since the 1980s, advanced computer hardware and software technologies, as well as multidisciplinary research, have made it possible to develop advanced three-dimensional (3D) simulation software for geoscience applications. Some countries, such as the USA1) and Canada2) 3), have built up regional 3D geological models based on archival geological data. Such models have played major roles in engineering geology2), hydrogeology2) 3), the geothermal industry1) and so on. In cooperation with the Municipality of Aachen, the Department of Engineering Geology of RWTH Aachen University has built a computer-based 3D stratigraphic model to 50 m depth for the center of Aachen, a geologically complex area of 5 km by 7 km. Uncorrelated data from multiple sources, the discontinuous nature of the units and their unconformable connections are the main challenges for geological modeling in this area. The reliability of 3D geological models largely depends on the quality and quantity of data. Existing 1D and 2D geological data were collected, including 1) approximately 6970 borehole records of different depths compiled in a Microsoft Access database and a MapInfo database; 2) a Digital Elevation Model (DEM); 3) geological cross sections; and 4) stratigraphic maps at 1 m, 2 m and 5 m depth. Since the acquired data are of variable origin, they were processed step by step. The main processes are described below: 1) Typing errors in the borehole data were identified, and the corrected data were exported to Variowin2.2 to detect duplicate points; 2) The surface elevation of the borehole data was compared to the DEM, and differences larger than 3 m were eliminated. Moreover, where elevation data were missing, they were read from the DEM; 3) Considerable data were collected from municipal constructions, such as residential buildings, factories, and roads. Therefore, many boreholes are spatially clustered, and only one or two representative points were picked out in such areas; After the above procedures, 5839 boreholes with -x
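
    A hedged sketch of the borehole screening steps listed above (duplicate removal, DEM cross-check with a 3 m tolerance, filling missing elevations from the DEM), using pandas with illustrative column names and a toy DEM; the actual Access/MapInfo databases and the Variowin workflow are not reproduced.

```python
import numpy as np
import pandas as pd

def clean_boreholes(df, dem_sample, tol=3.0):
    """Screen borehole records against a DEM (column names are illustrative).

    df         : DataFrame with columns x, y, elevation (elevation may be NaN)
    dem_sample : function (x, y) -> DEM elevation at that location
    tol        : maximum allowed |borehole elevation - DEM elevation| in metres
    """
    df = df.drop_duplicates(subset=["x", "y"]).copy()        # spatial duplicates
    dem_z = dem_sample(df["x"].to_numpy(), df["y"].to_numpy())
    df["elevation"] = df["elevation"].fillna(pd.Series(dem_z, index=df.index))  # read missing z from DEM
    keep = (df["elevation"] - dem_z).abs() <= tol             # reject gross disagreements
    return df[keep]

# Toy DEM: a gentle plane z = 100 + 0.01*x
dem = lambda x, y: 100.0 + 0.01 * np.asarray(x)
boreholes = pd.DataFrame({
    "x": [0.0, 0.0, 500.0, 900.0],
    "y": [0.0, 0.0, 100.0, 200.0],
    "elevation": [100.2, 100.2, np.nan, 180.0],   # duplicate, missing z, gross outlier
})
print(clean_boreholes(boreholes, dem))            # two records survive the screening
```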

  12. [Evaluation of Motion Sickness Induced by 3D Video Clips].

    PubMed

    Matsuura, Yasuyuki; Takada, Hiroki

    2016-01-01

    The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness are concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D content, as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology. PMID:26832611

  13. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration s Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  14. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. As expected, the 3-D histogram of the real data was
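
    The ruleset idea described above can be sketched as a simple threshold-plus-labelling step on a 3-D inversion cube; the depth-dependent velocity threshold and the synthetic anomaly below are assumptions made purely for illustration and do not reproduce the actual ruleset used in the study:

      import numpy as np
      from scipy import ndimage

      def extract_objects(velocity, depths, threshold_at_depth):
          # velocity: (nz, ny, nx) cube; depths: (nz,) depth of each slice;
          # threshold_at_depth(z) is a hypothetical rule, not the study's ruleset.
          thresh = np.array([threshold_at_depth(z) for z in depths])
          mask = velocity > thresh[:, None, None]
          # Group voxels that pass the rule into connected 3-D objects.
          labels, n_objects = ndimage.label(mask)
          return labels, n_objects

      # Toy usage: one fast anomaly in a homogeneous background.
      vel = np.full((10, 20, 20), 5.0)
      vel[3:6, 8:12, 8:12] = 7.5
      labels, n = extract_objects(vel, depths=np.arange(10, dtype=float),
                                  threshold_at_depth=lambda z: 6.0 + 0.05 * z)
      print(n)  # -> 1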

  15. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material which could potentially initiate another new material age. However, even when it is exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. A new technology should be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite’s linear thermal expansion coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress low during the printing process.

  16. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields. Measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) are compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delays between the UAV and the ground microphones, which are affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allow local meteorological measurements made at the UAV and ground receivers to supplement any acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
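
    As an illustration of the field representation described above, the short sketch below fits the weights of a Gaussian radial basis function expansion to scattered temperature samples and evaluates the reconstructed field at an arbitrary point; the centre positions, widths and the toy lapse-rate field are assumptions made purely for the example, not values from the paper:

      import numpy as np

      def rbf_design(points, centers, sigma):
          # Gaussian RBF design matrix: phi[i, j] = exp(-|p_i - c_j|^2 / (2*sigma^2)).
          d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      rng = np.random.default_rng(0)
      centers = rng.uniform(0.0, 1000.0, size=(25, 3))    # assumed RBF centres (m)
      samples = rng.uniform(0.0, 1000.0, size=(200, 3))   # assumed sample locations
      temp = 288.0 - 0.0065 * samples[:, 2]               # toy lapse-rate temperatures

      # Least-squares fit of the RBF weights to the observations.
      Phi = rbf_design(samples, centers, sigma=400.0)
      weights, *_ = np.linalg.lstsq(Phi, temp, rcond=None)

      # The fitted field can now be evaluated anywhere in the volume.
      query = np.array([[500.0, 500.0, 300.0]])
      print(rbf_design(query, centers, 400.0) @ weights)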

  17. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  18. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikaw, K.-I.; Frank, J.; Christodoulou, D. M.; Koide, S.; Sakai, J.-I.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W=4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy, so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure. We also simulate jets with more realistic injection conditions, including a helical magnetic field and perturbed density, velocity, and internal energy, which are expected to arise during jet generation. Three possible explanations for the observed variability are (i) tidal disruption of a star falling into the black hole, (ii) instabilities in the relativistic accretion disk, and (iii) jet-related processes. New results will be reported at the meeting.

  19. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  20. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material which could potentially initiate another new material age. However, even when it is exploited to its full extent, conventional processing methods fail to provide a link to today’s personalization tide. A new technology should be ushered in. Three-dimensional (3D) printing fills the missing link between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate for the first time that a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printed into computer-designed models. The composite’s linear thermal expansion coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial for keeping thermal stress low during the printing process. PMID:26153673

  1. 3D Ion Temperature Reconstruction

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi

    2009-11-01

    The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics have been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature from Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature from 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is followed over several repeatable discharges to follow the heating and acceleration process during the merging reconnection.

  2. Kirchhoff prestack migration using the suppressed wave equation estimation of traveltime (SWEET) algorithm in VTI media

    NASA Astrophysics Data System (ADS)

    Bae, Ho Seuk; Chung, Wookeen; Ha, Jiho; Shin, Changsoo

    2015-12-01

    This paper examines anisotropic prestack Kirchhoff migration. We used pseudo-acoustic wave equations in the complex frequency domain to describe the wave propagation in a vertical transversely isotropic (VTI) medium. Both amplitudes and traveltimes were calculated efficiently using the suppressed wave equation estimation of traveltime (SWEET) algorithm. The accuracy of the traveltimes obtained with the SWEET algorithm was verified by comparing the traveltime contours simulated with the anisotropic elastic wave equation using a staggered-grid method. Finally, we tested our migration algorithm using the two-dimensional HESS VTI model. We correctly imaged the shape of both the salt and background layer structures. We also reduced the numerical artefacts compared to the isotropic technique.
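
    Once traveltimes and amplitude weights are available (in the paper, from the SWEET algorithm), prestack Kirchhoff migration reduces to a weighted diffraction stack. The sketch below shows that summation step only, for a 2-D geometry with precomputed traveltime tables; it is not an implementation of the SWEET algorithm or of the anisotropic traveltime computation, and anti-aliasing filters and obliquity factors commonly used in practice are omitted:

      import numpy as np

      def kirchhoff_stack(traces, dt, t_src, t_rcv, weights=None):
          # traces: (ntrace, nt) data; t_src, t_rcv: (ntrace, nz, nx) traveltime
          # tables from source/receiver to every image point; weights: optional
          # amplitude weights of the same shape (e.g. as computed with SWEET).
          ntrace, nt = traces.shape
          image = np.zeros(t_src.shape[1:])
          for i in range(ntrace):
              it = np.rint((t_src[i] + t_rcv[i]) / dt).astype(int)  # two-way time index
              valid = (it >= 0) & (it < nt)
              amp = np.where(valid, traces[i, np.clip(it, 0, nt - 1)], 0.0)
              if weights is not None:
                  amp = amp * weights[i]
              image += amp                                           # stack into the image
          return image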

  3. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber, and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed bit format. It would not have been practical, if not impossible, to have processed the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution being a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality control and statics resolution options, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed data-set was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  4. Volumetric image display for complex 3D data visualization

    NASA Astrophysics Data System (ADS)

    Tsao, Che-Chih; Chen, Jyh Shing

    2000-05-01

    A volumetric image display (VID) is a new display technology capable of displaying computer generated 3D images in a volumetric space. Many viewers can walk around the display and see the image from all directions simultaneously without wearing any glasses. The image is real and possesses all major elements of both physiological and psychological depth cues. Due to the volumetric nature of its image, the VID can provide the most natural human-machine interface in operations involving 3D data manipulation and 3D target monitoring. The technology creates volumetric 3D images by projecting a series of profiling images distributed in space, which form a volumetric image because of the after-image effect of human eyes. Exemplary applications in biomedical image visualization were tested on a prototype display, using different methods to display a data set from CT scans. The features of this display technology make it most suitable for applications that require quick understanding of 3D relations, need frequent spatial interactions with the 3D images, or involve time-varying 3D data. It can also be useful for group discussion and decision making.

  5. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
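
    A minimal 2-D Line Integral Convolution illustrates the basic operation that volume LIC extends to 3-D: a noise texture is averaged along short streamline segments traced through the vector field. The grid size, step length and circular test flow below are arbitrary choices made for this example only:

      import numpy as np

      def lic_2d(vx, vy, noise, length=15, h=0.5):
          # Illustrative 2-D version; volume LIC traces streamlines through a 3-D field.
          ny, nx = noise.shape
          out = np.zeros_like(noise, dtype=float)
          for j in range(ny):
              for i in range(nx):
                  acc, n = 0.0, 0
                  for sign in (1.0, -1.0):              # trace both directions
                      x, y = float(i), float(j)
                      for _ in range(length):
                          ii, jj = int(round(x)), int(round(y))
                          if not (0 <= ii < nx and 0 <= jj < ny):
                              break
                          acc += noise[jj, ii]
                          n += 1
                          u, v = vx[jj, ii], vy[jj, ii]
                          norm = np.hypot(u, v) or 1.0  # avoid division by zero
                          x += sign * h * u / norm      # Euler step along the flow
                          y += sign * h * v / norm
                  out[j, i] = acc / max(n, 1)
          return out

      # Toy usage: circular flow convolved with white noise.
      ny, nx = 64, 64
      yy, xx = np.mgrid[0:ny, 0:nx]
      img = lic_2d(-(yy - ny / 2.0), xx - nx / 2.0,
                   np.random.default_rng(1).random((ny, nx)))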

  6. A systematized WYSIWYG pipeline for digital stereoscopic 3D filmmaking

    NASA Astrophysics Data System (ADS)

    Mueller, Robert; Ward, Chris; Hušák, Michal

    2008-02-01

    Digital tools are transforming stereoscopic 3D content creation and delivery, creating an opportunity for the broad acceptance and success of stereoscopic 3D films. Beginning in late 2005, a series of mostly CGI features has successfully introduced the public to this new generation of highly-comfortable, artifact-free digital 3D. While the response has been decidedly favorable, a lack of high-quality live-action films could hinder long-term success. Live-action stereoscopic films have historically been more time-consuming, costly, and creatively limiting than 2D films - thus a need arises for a live-action 3D filmmaking process which minimizes such limitations. A unique 'systematized' what-you-see-is-what-you-get (WYSIWYG) pipeline is described which allows the efficient, intuitive and accurate capture and integration of 3D and 2D elements from multiple shoots and sources - both live-action and CGI. Throughout this pipeline, digital tools utilize a consistent algorithm to provide meaningful and accurate visual depth references with respect to the viewing audience in the target theater environment. This intuitive, visual approach introduces efficiency and creativity to the 3D filmmaking process by eliminating both the need for a 'mathematician mentality' of spreadsheets and calculators, as well as any trial-and-error guesswork, while enabling the most comfortable, 'pixel-perfect', artifact-free 3D product possible.

  7. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D printed graphene aerogel is superelastic and highly electrically conductive. PMID:26861680

  8. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  9. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
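
    As an illustration of the spectral-filter viewing mode described above, the sketch below collapses a hyperspectral cube through up to three transmission curves to produce the corresponding filter-based images; the Gaussian band-pass shapes and wavelength range are assumptions made for the example and are not taken from ShowMe3D itself:

      import numpy as np

      def apply_spectral_filters(cube, wavelengths, filters):
          # cube: (ny, nx, nbands); filters: list of transmission(wavelength) callables.
          images = []
          for transmission in filters:
              t = np.array([transmission(w) for w in wavelengths])   # (nbands,)
              images.append((cube * t[None, None, :]).sum(axis=2))   # one 2-D image per filter
          return images

      # Toy usage: three hypothetical Gaussian band-pass filters.
      wl = np.linspace(450.0, 750.0, 60)
      cube = np.random.default_rng(2).random((128, 128, wl.size))
      gauss = lambda c, s: (lambda w: np.exp(-0.5 * ((w - c) / s) ** 2))
      red, green, blue = apply_spectral_filters(cube, wl,
                                                [gauss(650, 20), gauss(550, 20), gauss(470, 20)])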

  10. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 1980s, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm that generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction of the model parameters. Furthermore, one or two extra modelling runs are needed in order to calculate the step length. Our approach uses an explicit time-stepping finite-difference scheme that is 4th order in space and 2nd order in time, a 3D version of the one developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency domain analogue, although the discussion of which domain is more efficient still remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time-step. We tackled this problem using two different approaches. The first one makes better use of resources for small models of dimension equal
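
    The update scheme mentioned above (explicit time stepping, 2nd order in time and 4th order in space) can be illustrated with a scalar, acoustic 2-D sketch; the grid size, velocity and source in the toy example are assumptions, and the real code advances the full elastic stress/velocity system rather than a single pressure field:

      import numpy as np

      def step_wavefield(p_prev, p_curr, vel, dt, dx):
          # One leapfrog step: p(t+dt) = 2*p(t) - p(t-dt) + (vel*dt)^2 * Laplacian(p(t)),
          # with the Laplacian built from the standard 4th-order 5-point stencil per axis.
          lap = np.zeros_like(p_curr)
          n0, n1 = p_curr.shape
          for k, w in zip((-2, -1, 0, 1, 2), (-1/12, 4/3, -5/2, 4/3, -1/12)):
              lap[2:-2, 2:-2] += w * (p_curr[2 + k:n0 - 2 + k, 2:-2]
                                      + p_curr[2:-2, 2 + k:n1 - 2 + k])
          lap /= dx ** 2
          return 2.0 * p_curr - p_prev + (vel * dt) ** 2 * lap

      # Toy usage: a point source in a homogeneous 2000 m/s model.
      n, dx, dt = 201, 5.0, 0.001
      p0, p1 = np.zeros((n, n)), np.zeros((n, n))
      p1[n // 2, n // 2] = 1.0
      v = np.full((n, n), 2000.0)
      for _ in range(300):
          p0, p1 = p1, step_wavefield(p0, p1, v, dt, dx)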

  11. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  12. Clinical Assessment of Stereoacuity and 3-D Stereoscopic Entertainment

    PubMed Central

    Tidbury, Laurence P.; Black, Robert H.; O’Connor, Anna R.

    2015-01-01

    Abstract Background/Aims: The perception of compelling depth is often reported in individuals where no clinically measurable stereoacuity is apparent. We aim to investigate the potential cause of this finding by varying the amount of stereopsis available to the subject, and assessing their perception of depth when viewing 3-D video clips and a Nintendo 3DS. Methods: Monocular blur was used to vary interocular VA difference, consequently creating 4 levels of measurable binocular deficit from normal stereoacuity to suppression. Stereoacuity was assessed at each level using the TNO, Preschool Randot®, Frisby, the FD2, and Distance Randot®. Subjects also completed an object depth identification task using the Nintendo 3DS, a static 3DTV stereoacuity test, and a 3-D perception rating task of 6 video clips. Results: As interocular VA differences increased, stereoacuity of the 57 subjects (aged 16–62 years) decreased (e.g., 110”, 280”, 340”, and suppression). The ability to correctly identify depth on the Nintendo 3DS remained at 100% until suppression of one eye occurred. The perception of a compelling 3-D effect when viewing the video clips was rated high until suppression of one eye occurred, where the 3-D effect was still reported as fairly evident. Conclusion: If an individual has any level of measurable stereoacuity, the perception of 3-D when viewing stereoscopic entertainment is present. The presence of motion in stereoscopic video appears to provide cues to depth, where static cues are not sufficient. This suggests there is a need for a dynamic test of stereoacuity to be developed, to allow fully informed patient management decisions to be made. PMID:26669421

  13. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  14. Depth remapping using seam carving for depth image based rendering

    NASA Astrophysics Data System (ADS)

    Tsubaki, Ikuko; Iwauchi, Kenichi

    2015-03-01

    Depth remapping is a technique for controlling the depth range of stereo images. Conventional remapping, which applies a transform function to the whole image, is stable, but it sometimes reduces the 3D appearance too much. To cope with this problem, a depth remapping method which preserves the details of the depth structure is proposed. We apply seam carving, an effective technique for image retargeting, to depth remapping. An extended depth map is defined as a space-depth volume, and a seam surface, a 2D monotonic and connected manifold, is introduced. The depth range is reduced by removing depth values on the seam surface from the space-depth volume. Finally, a stereo image pair is synthesized from the corrected depth map and an input color image by depth image based rendering.
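
    The basic building block of seam carving is the dynamic-programming search for a minimum-cost seam, sketched below for an ordinary 2-D energy map; the paper lifts the same idea to a seam surface in the (x, y, depth) space-depth volume, so this is only an illustration of the underlying operation, with a random energy map standing in for real data:

      import numpy as np

      def min_vertical_seam(energy):
          # Returns, for each row, the column index of the minimum-cost
          # connected vertical seam through the energy map.
          h, w = energy.shape
          cost = energy.astype(float).copy()
          back = np.zeros((h, w), dtype=int)
          for i in range(1, h):
              for j in range(w):
                  lo, hi = max(j - 1, 0), min(j + 2, w)      # allowed predecessors
                  k = lo + int(np.argmin(cost[i - 1, lo:hi]))
                  back[i, j] = k
                  cost[i, j] += cost[i - 1, k]
          seam = np.zeros(h, dtype=int)
          seam[-1] = int(np.argmin(cost[-1]))
          for i in range(h - 1, 0, -1):                      # backtrack the seam
              seam[i - 1] = back[i, seam[i]]
          return seam

      # Removing the values on the seam narrows the map by one column per pass.
      print(min_vertical_seam(np.random.default_rng(3).random((6, 8))))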

  15. Scattering robust 3D reconstruction via polarized transient imaging.

    PubMed

    Wu, Rihui; Suo, Jinli; Dai, Feng; Zhang, Yongdong; Dai, Qionghai

    2016-09-01

    Reconstructing the 3D structure of scenes in a scattering medium is a challenging task with great research value. Existing techniques often impose strong assumptions on the scattering behavior and are of limited performance. Recently, a low-cost transient imaging system has provided a feasible way to resolve the scene depth by detecting the reflection instant on the time profile of a surface point. However, in the presence of a scattering medium, rays are both reflected and scattered during transmission, and the depth calculated from the time profile deviates largely from the true value. To handle this problem, we exploited the different polarization behaviors of the reflection and scattering components and introduced active polarization to separate the reflection component and estimate a scattering-robust depth. Our experiments demonstrate that our approach can accurately reconstruct the 3D structure underlying the scattering medium. PMID:27607944

  16. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  17. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  18. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. However, this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics as well as for (bio)medical research.

  19. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) Configuration controlled input files; (2) Common file for 2D and 3D, different types of capsules (symcap, etc.); and (3) Can obtain target dimensions, laser pulse, and diagnostics settings automatically from NIF Campaign Management Tool. Using 3D Hydra calculations to investigate different problems: (1) Intrinsic 3D asymmetry; (2) Tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) Synthetic diagnostics.

  20. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  1. Depth inpainting by tensor voting.

    PubMed

    Kulkarni, Mandar; Rajagopalan, Ambasamudram N

    2013-06-01

    Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data. PMID:24323102
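
    The plane-based filling step can be illustrated with an ordinary least-squares plane fit: in the sketch below a plane z = a*x + b*y + c is fitted to the valid pixels around a hole and used to predict the missing depths. Plain least squares here stands in for the tensor-voting plane estimate of the paper, and the tilted test surface is an assumption made for the example:

      import numpy as np

      def inpaint_with_plane(depth, missing):
          # Fit z = a*x + b*y + c to the known pixels, then evaluate it in the hole.
          # (Least squares replaces the paper's tensor-voting estimate here.)
          ys, xs = np.nonzero(~missing)
          A = np.c_[xs, ys, np.ones_like(xs)]
          (a, b, c), *_ = np.linalg.lstsq(A, depth[~missing], rcond=None)
          out = depth.copy()
          hy, hx = np.nonzero(missing)
          out[missing] = a * hx + b * hy + c
          return out

      # Toy usage: a tilted plane with a square hole.
      yy, xx = np.mgrid[0:32, 0:32]
      depth = 0.1 * xx + 0.05 * yy + 2.0
      hole = np.zeros_like(depth, dtype=bool)
      hole[10:15, 10:15] = True
      filled = inpaint_with_plane(np.where(hole, np.nan, depth), hole)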

  2. 3D Kitaev spin liquids

    NASA Astrophysics Data System (ADS)

    Hermanns, Maria

    The Kitaev honeycomb model has become one of the archetypal spin models exhibiting topological phases of matter, where the magnetic moments fractionalize into Majorana fermions interacting with a Z2 gauge field. In this talk, we discuss generalizations of this model to three-dimensional lattice structures. Our main focus is the metallic state that the emergent Majorana fermions form. In particular, we discuss the relation of the nature of this Majorana metal to the details of the underlying lattice structure. Besides (almost) conventional metals with a Majorana Fermi surface, one also finds various realizations of Dirac semi-metals, where the gapless modes form Fermi lines or even Weyl nodes. We introduce a general classification of these gapless quantum spin liquids using projective symmetry analysis. Furthermore, we briefly outline why these Majorana metals in 3D Kitaev systems provide an even richer variety of Dirac and Weyl phases than possible for electronic matter and comment on possible experimental signatures. Work done in collaboration with Kevin O'Brien and Simon Trebst.

  3. Yogi the rock - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Yogi, a rock taller than rover Sojourner, is the subject of this image, taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. The soil in the foreground has been the location of multiple soil mechanics experiments performed by Sojourner's cleated wheels. Pathfinder scientists were able to control the force inflicted on the soil beneath the rover's wheels, giving them insight into the soil's mechanical properties. The soil mechanics experiments were conducted after this image was taken.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    [Left- and right-eye views of the stereo image are available individually at the original site; figures removed for brevity.]

  4. 3D ultrafast laser scanner

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Goda, K.; Wang, C.; Fard, A.; Adam, J.; Gossett, D. R.; Ayazi, A.; Sollier, E.; Malik, O.; Chen, E.; Liu, Y.; Brown, R.; Sarkhosh, N.; Di Carlo, D.; Jalali, B.

    2013-03-01

    Laser scanners are essential for scientific research, manufacturing, defense, and medical practice. Unfortunately, often times the speed of conventional laser scanners (e.g., galvanometric mirrors and acousto-optic deflectors) falls short for many applications, resulting in motion blur and failure to capture fast transient information. Here, we present a novel type of laser scanner that offers roughly three orders of magnitude higher scan rates than conventional methods. Our laser scanner, which we refer to as the hybrid dispersion laser scanner, performs inertia-free laser scanning by dispersing a train of broadband pulses both temporally and spatially. More specifically, each broadband pulse is temporally processed by time stretch dispersive Fourier transform and further dispersed into space by one or more diffractive elements such as prisms and gratings. As a proof-of-principle demonstration, we perform 1D line scans at a record high scan rate of 91 MHz and 2D raster scans and 3D volumetric scans at an unprecedented scan rate of 105 kHz. The method holds promise for a broad range of scientific, industrial, and biomedical applications. To show the utility of our method, we demonstrate imaging, nanometer-resolved surface vibrometry, and high-precision flow cytometry with real-time throughput that conventional laser scanners cannot offer due to their low scan rates.

  5. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  6. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  7. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  8. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  9. Learning the spherical harmonic features for 3-D face recognition.

    PubMed

    Liu, Peijiang; Wang, Yunhong; Huang, Di; Zhang, Zhaoxiang; Chen, Liming

    2013-03-01

    In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies contained in spherical harmonics with different frequencies, thereby enabling the capture of both gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D FR techniques which are either holistic or feature based, using local features extracted from distinctive points. First, 3-D face models are represented in a canonical representation, namely, spherical depth map, by which SHF can be calculated. Then, considering the predictive contribution of each SHF feature, especially in the presence of facial expression and occlusion, feature selection methods are used to improve the predictive performance and provide faster and more cost-effective predictors. Experiments have been carried out on three public 3-D face datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulties in terms of facial expression, pose, and occlusion, and which demonstrate the effectiveness of the proposed method. PMID:23060332
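
    The feature itself, the energy carried by each spherical-harmonic degree of a spherical depth map, can be sketched with a straightforward quadrature; the grid resolution and maximum degree below are arbitrary choices for the example, with scipy.special.sph_harm supplying the basis functions:

      import numpy as np
      from scipy.special import sph_harm

      def sh_energy_features(depth_map, l_max=8):
          # depth_map sampled on an (n_phi, n_theta) grid: polar angle phi in (0, pi),
          # azimuth theta in [0, 2*pi). Returns one energy value per degree l.
          n_phi, n_theta = depth_map.shape
          phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi
          theta = np.arange(n_theta) * 2.0 * np.pi / n_theta
          TH, PH = np.meshgrid(theta, phi)
          d_area = np.sin(PH) * (np.pi / n_phi) * (2.0 * np.pi / n_theta)

          energies = []
          for l in range(l_max + 1):
              e = 0.0
              for m in range(-l, l + 1):
                  Y = sph_harm(m, l, TH, PH)                   # Y_l^m(theta, phi)
                  c = np.sum(depth_map * np.conj(Y) * d_area)  # projection coefficient
                  e += abs(c) ** 2
              energies.append(e)
          return np.array(energies)

      # A constant-radius sphere puts essentially all of its energy in l = 0.
      print(sh_energy_features(np.ones((64, 128)), l_max=3))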

  10. Generation and use of measurement-based 3-D dose distributions for 3-D dose calculation verification.

    PubMed

    Stern, R L; Fraass, B A; Gerhardsson, A; McShan, D L; Lam, K L

    1992-01-01

    A 3-D radiation therapy treatment planning system calculates dose to an entire volume of points and therefore requires a 3-D distribution of measured dose values for quality assurance and dose calculation verification. To measure such a volumetric distribution with a scanning ion chamber is prohibitively time consuming. A method is presented for the generation of a 3-D grid of dose values based on beam's-eye-view (BEV) film dosimetry. For each field configuration of interest, a set of BEV films at different depths is obtained and digitized, and the optical densities are converted to dose. To reduce inaccuracies associated with film measurement of megavoltage photon depth doses, doses on the different planes are normalized using an ion-chamber measurement of the depth dose. A 3-D grid of dose values is created by interpolation between BEV planes along divergent beam rays. This matrix of measurement-based dose values can then be compared to calculations over the entire volume of interest. This method is demonstrated for three different field configurations. Accuracy of the film-measured dose values is determined by 1-D and 2-D comparisons with ion chamber measurements. Film and ion chamber measurements agree within 2% in the central field regions and within 2.0 mm in the penumbral regions. PMID:1620042
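
    The interpolation along divergent beam rays can be sketched as follows: a voxel is projected back along its ray to the two bracketing measurement planes, the plane doses are sampled at the scaled lateral positions, and a linear interpolation in depth gives the voxel dose. The source-to-surface distance, the shared plane axes and the scaling by (SSD + d)/(SSD + z) are assumptions of this simplified sketch rather than details taken from the paper:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      def dose_at(x, y, z, planes, plane_depths, plane_axes, ssd=1000.0):
          # planes: list of 2-D dose arrays, one per depth in plane_depths (mm);
          # plane_axes: (ys, xs) coordinate axes assumed shared by all planes;
          # ssd: assumed source-to-surface distance defining the beam divergence.
          k = int(np.clip(np.searchsorted(plane_depths, z), 1, len(plane_depths) - 1))
          d0, d1 = plane_depths[k - 1], plane_depths[k]
          vals = []
          for d, plane in ((d0, planes[k - 1]), (d1, planes[k])):
              scale = (ssd + d) / (ssd + z)        # project the voxel back along its ray
              interp = RegularGridInterpolator(plane_axes, plane,
                                               bounds_error=False, fill_value=0.0)
              vals.append(interp([[y * scale, x * scale]])[0])
          w = (z - d0) / (d1 - d0)                 # linear interpolation in depth
          return (1.0 - w) * vals[0] + w * vals[1]

      # Toy usage: two uniform planes at 50 mm and 150 mm depth.
      ys = xs = np.linspace(-100.0, 100.0, 21)
      planes = [np.full((21, 21), 1.0), np.full((21, 21), 0.8)]
      print(dose_at(10.0, 5.0, 100.0, planes, [50.0, 150.0], (ys, xs)))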

  11. 3-D Cavern Enlargement Analyses

    SciTech Connect

    EHGARTNER, BRIAN L.; SOBOLIK, STEVEN R.

    2002-03-01

    Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns as fresh water is injected to displace the crude oil from the caverns. The current criteria adopted by the SPR limit cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25 year period to roughly double the volume of a 19 cavern field. Thirteen additional leaches were then simulated until caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site. This enabled comparison of data collected over nearly 20 years with analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic well head pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large scale drawdowns being simulated. Should they occur in the future, code predictions should be validated against actual field behavior at that time. The simulations were performed using JAS3D, a three dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and, later, on surface structures due to subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.

  12. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only for scientists but also for amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and with civil drone platforms are dealt with. To advise on the most suitable 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, even claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, recall the stage of the invention of photography.

  13. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  14. 3D Printing and Its Urologic Applications

    PubMed Central

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  15. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.

  16. 3D Elastic Seismic Wave Propagation Code

    1998-09-23

    E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output.

  17. 3D Printing and Its Urologic Applications.

    PubMed

    Soliman, Youssef; Feibus, Allison H; Baum, Neil

    2015-01-01

    3D printing is the development of 3D objects via an additive process in which successive layers of material are applied under computer control. This article discusses 3D printing, with an emphasis on its historical context and its potential use in the field of urology. PMID:26028997

  18. A 3D mosaic algorithm using disparity map

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Kakeya, Hideki

    2015-03-01

Conventionally there exist two major methods to create mosaics in 3D videos. One is to duplicate the area of mosaics from the image of one viewpoint (the left view or the right view) to that of the other viewpoint. This method, which is not capable of expressing depth, cannot give viewers a natural perception in 3D. The other method is to create the mosaics separately in the left view and the right view. With this method the depth is expressed in the area of mosaics, but the 3D perception is not natural enough. To overcome these problems, we propose a method to create mosaics by using a disparity map. In the proposed method the mosaic of the image from one viewpoint is made with the conventional method, while the mosaic of the image from the other viewpoint is made based on the data of the disparity map, so that the mosaic patterns of the two images can give proper depth perception to the viewer. We confirm that the proposed mosaic pattern using a disparity map gives more natural depth perception to the viewer by subjective experiments using a static image and two videos.
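
    A minimal sketch of the proposed idea, assuming a block-averaging mosaic and a per-block average disparity; the block size, array names and toy data are illustrative assumptions only.

    ```python
    import numpy as np

    def mosaic_left(img, x0, y0, w, h, block=8):
        """Pixelate (mosaic) a rectangular region of the left view."""
        out = img.copy()
        for by in range(y0, y0 + h, block):
            for bx in range(x0, x0 + w, block):
                patch = out[by:by + block, bx:bx + block]
                patch[:] = patch.mean()          # replace block by its mean value
        return out

    def mosaic_right_from_disparity(right, left_mosaic, disparity, x0, y0, w, h, block=8):
        """Build the right-view mosaic by shifting each left-view mosaic block
        horizontally by the average disparity of that block."""
        out = right.copy()
        for by in range(y0, y0 + h, block):
            for bx in range(x0, x0 + w, block):
                d = int(round(disparity[by:by + block, bx:bx + block].mean()))
                value = left_mosaic[by:by + block, bx:bx + block].mean()
                xr = bx - d                       # corresponding column in the right view
                out[by:by + block, xr:xr + block] = value
        return out

    # toy example: a 64x64 grayscale pair with a constant disparity of 4 pixels
    left = np.random.rand(64, 64)
    right = np.roll(left, -4, axis=1)
    disp = np.full((64, 64), 4.0)
    lm = mosaic_left(left, 16, 16, 24, 24)
    rm = mosaic_right_from_disparity(right, lm, disp, 16, 16, 24, 24)
    ```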

  19. A 3-D velocity structure in and around the Miura peninsula, Japan, using a 3-component off-line seismographic array.

    NASA Astrophysics Data System (ADS)

    Kawamura, T.; Hirata, N.; Sato, H.; Onishi, M.; Noda, K.; Saito, H.

    2004-12-01

A deep seismic profiling project around the Metropolitan Tokyo region, the Kanto district, started in 2002 under the project titled the Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion. The deep seismic profiling, Tokyo Bay 2003, was performed along the major axis of the Tokyo Bay. Because the seismic line in the Miura peninsula passes through a densely populated area, we have low signal-to-noise-ratio data due to cultural noise. Thus, in addition to the conventional reflection profiling, we deployed 51 off-line recorders with a 3-component 4.5 Hz geophone at carefully selected, quiet receiver points. During 90 days, we had continuous records including many shot signals produced by vibrators on land and air-guns in the bay area. These data provided far-offset first arrival signals and wide angle reflections. We focus on the common receiver gather records of the Tokyo Bay 2003 off-line station data to identify first arrival and wide angle phases. We applied the first arrival tomography method using a finite difference travel time solver (Hole, 1992) to those data to obtain a 3-D P-wave velocity structure of the uppermost crust along the profile. We obtained a velocity model in and around the Miura peninsula as follows: across the Tokyo Bay, near the surface is a layer with velocities of 2.0-2.5 km/s. A low velocity area corresponds to the fore-arc basin sediment (post Early Miocene), which extends to a depth of approximately 4 km. High velocity patches are located at a depth of approximately 6 km under the Miura peninsula, which we interpreted as Pre-Neogene basement rocks. Finally, the velocity structure obtained by the tomography analysis is used to improve the processing of the reflection profiling data to clarify the deeper structure in the peninsula, including a good velocity constraint for a pre-stack migration of the reflection profiling data.

  20. 3-D Perspective Pasadena, California

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This perspective view shows the western part of the city of Pasadena, California, looking north towards the San Gabriel Mountains. Portions of the cities of Altadena and La Canada, Flintridge are also shown. The image was created from three datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data; Landsat data from November 11, 1986 provided the land surface color (not the sky) and U.S. Geological Survey digital aerial photography provides the image detail. The Rose Bowl, surrounded by a golf course, is the circular feature at the bottom center of the image. The Jet Propulsion Laboratory is the cluster of large buildings north of the Rose Bowl at the base of the mountains. A large landfill, Scholl Canyon, is the smooth area in the lower left corner of the scene. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires strip the mountains of vegetation, increasing the hazards from flooding and mudflows for several years afterwards. Data such as shown on this image can be used to predict both how wildfires will spread over the terrain and also how mudflows will be channeled down the canyons. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  1. The Esri 3D city information model

    NASA Astrophysics Data System (ADS)

    Reitz, T.; Schubiger-Banz, S.

    2014-02-01

With residential and commercial space becoming increasingly scarce, cities are going vertical. Managing urban environments in 3D is an increasingly important and complex undertaking. To help solve this problem, Esri has released the ArcGIS for 3D Cities solution. The ArcGIS for 3D Cities solution provides the information model, tools and apps for creating, analyzing and maintaining a 3D city using the ArcGIS platform. This paper presents an overview of the 3D City Information Model and some sample use cases.

  2. 3D surface defect analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Yang, B.; Jia, M.; Song, G. J.; Tao, L.; Harding, K. G.

    2008-08-01

A method is proposed for surface defect analysis and evaluation. Good 3D point clouds can now be obtained through a variety of surface profiling methods such as stylus tracers, structured light, or interferometry. In order to inspect a surface for defects, first a reference surface that represents the surface without any defects needs to be identified. This reference surface can then be fit to the point cloud. The algorithm we present finds the least-squares solution of the overdetermined equation set to obtain the parameters of the reference surface's mathematical description. The distance between each point within the point cloud and the reference surface is then calculated using the derived reference surface equation. For analysis of the data, the user can preset a threshold distance value. If the calculated distance is greater than the threshold value, the corresponding point is marked as a defect point. The software then generates a color-coded map of the measured surface. Defect points that are connected together are formed into a defect-clustering domain. Each defect-clustering domain is treated as one defect area. We then use a clustering-domain searching algorithm to automatically search all the defect areas in the point cloud. The critical parameters that can be calculated for evaluating the defect status of a point cloud are: P-Depth, the peak depth of all defects; Defect Number, the number of surface defects; Defects/Area, the defect number per unit area; and Defect Coverage Ratio, the ratio of the defect area to the region of interest.
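
    A minimal sketch of the described pipeline, assuming a plane as the reference surface (the paper allows more general reference surfaces) and using connected-component labelling for the defect-clustering domains; all names and the toy data are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def fit_reference_plane(points):
        """Least-squares plane z = a*x + b*y + c fitted to an (N, 3) point cloud."""
        x, y, z = points.T
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
        return a, b, c

    def defect_map(grid_z, xs, ys, threshold):
        """Mark grid points whose distance to the reference plane exceeds a preset
        threshold, then group connected defect points into defect domains."""
        X, Y = np.meshgrid(xs, ys)
        pts = np.column_stack([X.ravel(), Y.ravel(), grid_z.ravel()])
        a, b, c = fit_reference_plane(pts)
        # signed point-to-plane distance
        dist = (grid_z - (a * X + b * Y + c)) / np.sqrt(a * a + b * b + 1.0)
        defects = np.abs(dist) > threshold
        labels, n_domains = ndimage.label(defects)   # defect-clustering domains
        p_depth = np.abs(dist[defects]).max() if n_domains else 0.0
        coverage = defects.mean()                    # defect coverage ratio
        return labels, n_domains, p_depth, coverage

    # toy surface with one pit
    xs = ys = np.linspace(0, 1, 100)
    Z = 0.02 * np.add.outer(ys, xs)                  # gently tilted reference
    Z[40:50, 60:70] -= 0.05                          # a defect (pit)
    labels, n, p_depth, cov = defect_map(Z, xs, ys, threshold=0.01)
    print(n, p_depth, cov)
    ```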

  3. Ultrasonic impact damage assessment in 3D woven composite materials

    NASA Astrophysics Data System (ADS)

    Mannai, E.; Lamboul, B.; Roche, J. M.

    2015-03-01

    An ultrasonic nondestructive methodology is proposed for the assessment of low velocity impact damage in a 3D woven composite material. The output data is intended for material scientists and numerical scientists to validate the damage tolerance performance of the manufactured materials and the reliability of damage modeling predictions. A depth-dependent threshold based on the reflectivity of flat bottom holes is applied to the ultrasonic data to remove the structural noise and isolate echoes of interest. The methodology was applied to a 3 mm thick 3D woven composite plate impacted with different energies. An artificial 3D representation of the detected echoes is proposed to enhance the spatial perception of the generated damage by the end user. The paper finally highlights some statistics made on the detected echoes to quantitatively assess the impact damage resistance of the tested specimens.

  4. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  5. Reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

    2013-08-01

Reconstruction of three-dimensional (3D) scenes is an active research topic in the field of computer vision and 3D display. It's a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system requires only a sequence of images taken with a camera whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. Firstly, image sequences are acquired by the camera freely moving around the object. Secondly, the scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm. An initial matching is then made for the first two images of the sequence. For each subsequent image, processed together with the previous one, the points of interest corresponding to those in previous images are refined or corrected. The vertical parallax between the images is eliminated. The next step is to calibrate the camera: its intrinsic and external parameters are calculated, which gives the relative position and orientation of the camera. A sequence of depth maps is acquired by using a non-local cost aggregation method for stereo matching. A point cloud sequence is then obtained from the scene depths, and a point cloud model is assembled from the sequence using the external parameters of the camera. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3
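
    A hedged sketch of the pairwise feature-matching and relative-pose step, using OpenCV's SIFT, RANSAC essential-matrix estimation and pose recovery as stand-ins; the intrinsics, file names and thresholds are assumptions, and the paper's non-local stereo matching and surface reconstruction stages are not reproduced.

    ```python
    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Match SIFT features between two frames and recover the relative
        camera rotation R and translation direction t (up to scale)."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # ratio-test matching
        matcher = cv2.BFMatcher()
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]

        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

        # essential matrix with RANSAC, then cheirality check to get R, t
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
        return R, t

    # usage (hypothetical file names and intrinsics)
    # K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
    # img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    # img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    # R, t = relative_pose(img1, img2, K)
    ```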

  6. Reservoir geology using 3D modelling tools

    SciTech Connect

    Dubrule, O.; Samson, P.; Segonds, D.

    1996-12-31

The last decade has seen tremendous developments in the area of quantitative geological modelling. These developments have a significant impact on the current practice of constructing reservoir models. A structural model can first be constructed on the basis of depth-converted structural interpretations produced on a seismic interpretation workstation. Surfaces and faults can be represented as geological objects, and interactively modified. Once the tectonic framework has been obtained, intermediate stratigraphic surfaces can be constructed between the main structural surfaces. Within each layer, reservoir attributes can be represented using various techniques. Examples show how the distribution of different facies (i.e. from fine to coarse grain) can be represented, or how various depositional units (for instance channels, crevasses and lobes in a turbidite setting) can be modelled as geological "objects" with complex geometries. Elf Aquitaine, in close co-operation with the GOCAD project in Nancy (France) is investigating how geological models can be made more realistic by developing interactive functionalities. Examples show that, contrary to standard deterministic or geostatistical modelling techniques (which tend to be difficult to control) the use of new 3D tools allows the geologist to interactively modify geological surfaces (including faults) or volumetric properties. Thus, the sensitivity of various economic parameters (oil in place, connected volumes, reserves) to major geological uncertainties can be evaluated. It is argued that future breakthroughs in geological modelling techniques are likely to happen in the development of interactive approaches rather than in the research of new mathematical algorithms.

  7. Reservoir geology using 3D modelling tools

    SciTech Connect

Dubrule, O.; Samson, P.; Segonds, D.

    1996-01-01

The last decade has seen tremendous developments in the area of quantitative geological modelling. These developments have a significant impact on the current practice of constructing reservoir models. A structural model can first be constructed on the basis of depth-converted structural interpretations produced on a seismic interpretation workstation. Surfaces and faults can be represented as geological objects, and interactively modified. Once the tectonic framework has been obtained, intermediate stratigraphic surfaces can be constructed between the main structural surfaces. Within each layer, reservoir attributes can be represented using various techniques. Examples show how the distribution of different facies (i.e. from fine to coarse grain) can be represented, or how various depositional units (for instance channels, crevasses and lobes in a turbidite setting) can be modelled as geological "objects" with complex geometries. Elf Aquitaine, in close co-operation with the GOCAD project in Nancy (France) is investigating how geological models can be made more realistic by developing interactive functionalities. Examples show that, contrary to standard deterministic or geostatistical modelling techniques (which tend to be difficult to control) the use of new 3D tools allows the geologist to interactively modify geological surfaces (including faults) or volumetric properties. Thus, the sensitivity of various economic parameters (oil in place, connected volumes, reserves) to major geological uncertainties can be evaluated. It is argued that future breakthroughs in geological modelling techniques are likely to happen in the development of interactive approaches rather than in the research of new mathematical algorithms.

  8. A Practical Approach of Curved Ray Prestack Kirchhoff Time Migration on GPGPU

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohua; Li, Chuang; Wang, Xu; Li, Kang

We introduced four prototypes of General Purpose GPU solutions using the Compute Unified Device Architecture (CUDA) on NVidia GeForce 8800GT and Tesla C870 for a practical Curved Ray Prestack Kirchhoff Time Migration program, which is one of the most widely adopted imaging methods in the seismic data processing industry. We presented how to re-design and re-implement the original CPU code as efficient GPU code step by step. We demonstrated optimization methods, such as how to reduce the overhead of memory transfers on the PCI-E bus, how to significantly increase the number of kernel threads on GPU cores, how to buffer the inputs and outputs of CUDA kernel modules, and how to use memory streams to overlap GPU kernel execution time, in order to improve the runtime performance on GPUs. We analyzed the floating-point errors between CPUs and GPUs. We presented the images generated by the CPU and GPU programs for the same real-world seismic data inputs. Our final approach, Prototype-IV on the NVidia GeForce 8800GT, is 16.3 times faster than its CPU version on an Intel P4 3.0 GHz.
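
    For orientation, the kernel such a migration spends its time in is a diffraction summation over traces. The NumPy sketch below uses straight-ray double-square-root traveltimes rather than the paper's curved rays, and none of the CUDA-specific optimizations are reproduced; all names and the toy data are assumptions.

    ```python
    import numpy as np

    def kirchhoff_time_migration(traces, src_x, rec_x, dt, image_x, image_t, v_rms):
        """Diffraction-stack (Kirchhoff) time migration with straight-ray
        double-square-root traveltimes.

        traces : (n_traces, n_samples) recorded data
        src_x, rec_x : source/receiver surface positions per trace
        image_x, image_t : output image coordinates (lateral position, two-way time)
        v_rms : (len(image_t),) RMS velocity at each output time
        """
        n_samples = traces.shape[1]
        image = np.zeros((len(image_x), len(image_t)))
        for ix, x in enumerate(image_x):
            for it, t0 in enumerate(image_t):
                v = v_rms[it]
                t_half = 0.5 * t0                                  # one-way vertical time
                ts = np.sqrt(t_half**2 + ((src_x - x) / v) ** 2)   # source leg
                tr = np.sqrt(t_half**2 + ((rec_x - x) / v) ** 2)   # receiver leg
                isamp = np.rint((ts + tr) / dt).astype(int)        # traveltime -> sample index
                valid = isamp < n_samples
                image[ix, it] = traces[np.arange(len(traces))[valid], isamp[valid]].sum()
        return image

    # toy usage: 48 zero-offset traces, constant 2000 m/s velocity
    nt, dt = 1000, 0.004
    src = rec = np.linspace(0.0, 1175.0, 48)
    data = np.zeros((48, nt)); data[:, 250] = 1.0          # flat event at 1.0 s
    img = kirchhoff_time_migration(data, src, rec, dt, np.linspace(0, 1175, 24),
                                   np.arange(0.2, 1.6, 0.1), np.full(14, 2000.0))
    ```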

  9. Pre-stack full wavefield inversion for elastic parameters of TI media

    NASA Astrophysics Data System (ADS)

    Zhang, Meigen; Huang, Zhongyu; Li, Xiaofan; Wang, Miaoyue; Xu, Guangyin

    2006-03-01

Pre-stack full wavefield inversion for the elastic parameters of transversely isotropic media is implemented. The Jacobian matrix is derived directly with the finite element method, just like the full wavefield forward modelling. An absorbing boundary scheme combining Liao's transparent boundary condition with Sarma's attenuation boundary condition is applied to the forward modelling and the Jacobian calculation. The input data are the complete ground-recorded wavefields containing full kinematic and dynamic information for the seismic waves. Inversion with such data is desirable as it should improve the accuracy of the estimated parameters and also reduce data pre-processing, such as wavefield identification and separation. A scheme called energy-grading inversion is presented to deal with the instability caused by the large energy difference between different arrivals in the input data. With this method, parameters in the shallow areas, which mainly affect wave patterns with strong energy, converge before those of deeper media. Thus, the number of unknowns in each inversion step is reduced, and the stability and reliability of the inversion process are greatly improved. As a result, the scheme helps to reduce the non-uniqueness of the inversion. Two synthetic examples show that the inversion system is reliable and accurate even though the initial models deviate significantly from the actual models. Also, the system can accurately invert for transversely isotropic model parameters even with the introduction of strong random noise.
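
    A generic, hedged sketch of how a finite-element-derived Jacobian could drive one damped least-squares model update, with a simple parameter mask standing in for the shallow-first spirit of energy-grading inversion; this is not the authors' exact scheme, and all names are assumptions.

    ```python
    import numpy as np

    def damped_gauss_newton_step(jacobian, residual, damping, update_mask=None):
        """One damped (Levenberg-Marquardt style) least-squares model update
        dm = (J^T J + lambda*I)^-1 J^T r, optionally restricted to a subset of
        parameters (a crude stand-in for updating shallow parameters first)."""
        JtJ = jacobian.T @ jacobian
        Jtr = jacobian.T @ residual
        dm = np.linalg.solve(JtJ + damping * np.eye(JtJ.shape[0]), Jtr)
        if update_mask is not None:
            dm = dm * update_mask          # freeze parameters not yet being inverted
        return dm

    # toy usage: 5 model parameters, 20 data points, only the first 3 parameters updated
    rng = np.random.default_rng(0)
    J = rng.normal(size=(20, 5))
    r = rng.normal(size=20)
    mask = np.array([1, 1, 1, 0, 0], dtype=float)
    dm = damped_gauss_newton_step(J, r, damping=0.1, update_mask=mask)
    ```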

  10. A hybrid method for strong low-frequency noise suppression in prestack seismic data

    NASA Astrophysics Data System (ADS)

    Hu, Chunhua; Lu, Wenkai

    2014-09-01

Low-frequency components are an important portion of seismic data in exploration geophysics, and have great effects on seismic imaging of the deep subsurface and on full waveform inversion. Unfortunately, seismic data usually suffer from various kinds of noise and have a low signal-to-noise ratio (SNR) in the low-frequency band, although this situation has been improved by developments in acquisition technology. In this paper, we propose a low-frequency cascade filter (LFCF) in the Fourier domain for strong low-frequency noise suppression in prestack gathers. The LFCF includes a 1D adaptive median filter in the f-x domain and a 2D notch filter in the f-k domain, and is able to process high-amplitude swell noise, random noise, and seismic interference noise. We employ trace rearrangement and spike-detection mechanisms in the adaptive f-x median filter, which can specifically handle strong noise such as wide-spreading swell noise and tug noise. A notch filter in the f-k domain is designed to separate reflection signal and random noise by their different apparent velocities. Through these means, our method can effectively attenuate low-frequency random and coherent noise while simultaneously protecting the signal. Experiments on a synthetic example and field data are conducted, and the results demonstrate that our method is practical and effective and can preserve signal down to 2 Hz.
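
    A simplified, hedged sketch of the f-x median-filtering idea with spike detection; the paper's trace rearrangement, adaptive window and f-k notch filter are not reproduced, and the window size, thresholds and toy gather are assumptions.

    ```python
    import numpy as np

    def fx_median_denoise(gather, dt, f_max=15.0, window=7, spike_factor=2.0):
        """Simplified f-x median filtering of a prestack gather (traces x samples).

        For every frequency below f_max, the complex spectrum is compared trace by
        trace with a running median across traces; samples whose magnitude exceeds
        spike_factor times the local median magnitude are treated as spikes (e.g.
        swell noise) and replaced by that median."""
        n_traces, n_samples = gather.shape
        spec = np.fft.rfft(gather, axis=1)                 # f-x domain
        freqs = np.fft.rfftfreq(n_samples, d=dt)
        half = window // 2
        for jf in np.where(freqs <= f_max)[0]:
            col = spec[:, jf].copy()
            for i in range(n_traces):
                lo, hi = max(0, i - half), min(n_traces, i + half + 1)
                med = np.median(col[lo:hi].real) + 1j * np.median(col[lo:hi].imag)
                if np.abs(col[i]) > spike_factor * max(np.abs(med), 1e-12):
                    spec[i, jf] = med                       # spike detected -> replace
        return np.fft.irfft(spec, n=n_samples, axis=1)

    # toy gather: a reflection-like event plus a strong low-frequency burst on one trace
    dt = 0.004
    t = np.arange(500) * dt
    gather = np.tile(np.sin(2 * np.pi * 30 * t) * np.exp(-5 * t), (24, 1))
    gather[10] += 10 * np.sin(2 * np.pi * 3 * t)           # swell-like noise
    clean = fx_median_denoise(gather, dt)
    ```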

  11. Insect stereopsis demonstrated using a 3D insect cinema

    PubMed Central

    Nityananda, Vivek; Tarawneh, Ghaith; Rosner, Ronny; Nicolas, Judith; Crichton, Stuart; Read, Jenny

    2016-01-01

    Stereopsis - 3D vision – has become widely used as a model of perception. However, all our knowledge of possible underlying mechanisms comes almost exclusively from vertebrates. While stereopsis has been demonstrated for one invertebrate, the praying mantis, a lack of techniques to probe invertebrate stereopsis has prevented any further progress for three decades. We therefore developed a stereoscopic display system for insects, using miniature 3D glasses to present separate images to each eye, and tested our ability to deliver stereoscopic illusions to praying mantises. We find that while filtering by circular polarization failed due to excessive crosstalk, “anaglyph” filtering by spectral content clearly succeeded in giving the mantis the illusion of 3D depth. We thus definitively demonstrate stereopsis in mantises and also demonstrate that the anaglyph technique can be effectively used to deliver virtual 3D stimuli to insects. This method opens up broad avenues of research into the parallel evolution of stereoscopic computations and possible new algorithms for depth perception. PMID:26740144

  12. 3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy

    NASA Astrophysics Data System (ADS)

    Henry, Samuel C.

Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

  13. Insect stereopsis demonstrated using a 3D insect cinema.

    PubMed

    Nityananda, Vivek; Tarawneh, Ghaith; Rosner, Ronny; Nicolas, Judith; Crichton, Stuart; Read, Jenny

    2016-01-01

    Stereopsis - 3D vision - has become widely used as a model of perception. However, all our knowledge of possible underlying mechanisms comes almost exclusively from vertebrates. While stereopsis has been demonstrated for one invertebrate, the praying mantis, a lack of techniques to probe invertebrate stereopsis has prevented any further progress for three decades. We therefore developed a stereoscopic display system for insects, using miniature 3D glasses to present separate images to each eye, and tested our ability to deliver stereoscopic illusions to praying mantises. We find that while filtering by circular polarization failed due to excessive crosstalk, "anaglyph" filtering by spectral content clearly succeeded in giving the mantis the illusion of 3D depth. We thus definitively demonstrate stereopsis in mantises and also demonstrate that the anaglyph technique can be effectively used to deliver virtual 3D stimuli to insects. This method opens up broad avenues of research into the parallel evolution of stereoscopic computations and possible new algorithms for depth perception. PMID:26740144

  14. Sydney-Gunnedah-Bowen Basin deep 3D structure

    NASA Astrophysics Data System (ADS)

    Danis, Cara

    2012-01-01

    Studies of the Sydney-Gunnedah-Bowen Basin (SGBB), one of the largest extensional rift sedimentary basins on the east coast of Australia, lack an understanding of the 3D upper crustal structure. Understanding of the subsurface structure is essential for many areas of resource exploration, development and management, as well as scientific research. Geological models provide a way to visualise and investigate the subsurface structure. The integrated regional scale gravity modelling approach, which uses boreholes and seismic data constraints, provides an understanding of the upper crustal structure and allows the development of a 3D geological model which can be used as the architectural framework for many different applications. This work presents a 3D geological model of the SGBB developed for application in high resolution thermal models. It is the culmination of geological surfaces derived from the interpolation of previous regional scale 2D gravity models and numerous borehole records. The model outlines the basement structure of the SGBB and provides information on depth to basement, depth to basal volcanics and thickness of overlying sediments. Through understanding the uncertainties, limitations, confidence and reliability of this model, the 3D geological model can provide the ideal framework for future research.

  15. Relationship between ridge segmentation and Moho transition zone structure from 3D multichannel seismic data collected over the fast-spreading East Pacific Rise at 9°50'N

    NASA Astrophysics Data System (ADS)

    Aghaei, O.; Nedimovic, M. R.; Canales, J.; Carton, H. D.; Carbotte, S. M.; Mutter, J. C.

    2010-12-01

    We present stack and migrated stack volumes of a fast-spreading center produced from the high-resolution 3D multichannel seismic (MCS) data collected in summer of 2008 over the East Pacific Rise (EPR) at 9°50’N during cruise MGL0812. These volumes give us new insight into the 3D structure of the lower crust and Moho Transition Zone (MTZ) along and across the ridge axis, and how this structure relates to the ridge segmentation at the spreading axis. The area of 3D coverage is between 9°38’N and 9°58’N (~1000 km2) where the documented eruptions of 1990-91 and 2005-06 occurred. This high-resolution survey has a nominal bin size of 6.25 m in cross-axis direction and 37.5 m in along-axis direction. The prestack processing sequence applied to data includes 1D and 2D filtering to remove low-frequency cable noise, offset-dependent spherical divergence correction to compensate for geometrical spreading, surface-consistent amplitude correction to balance abnormally high/low shot and channel amplitudes, trace editing, velocity analysis, normal moveout (NMO), and CMP mute of stretched far offset arrivals. The poststack processing includes seafloor multiple mute to reduce migration noise and poststack time migration. We also will apply primary multiple removal and prestack time migration to the data and compare the results to the migrated stack volume. The poststack and prestack migrated volumes will then be used to detail Moho seismic signature variations and their relationship to ridge segmentation, crustal age, bathymetry, and magmatism. We anticipate that the results will also provide insight into the mantle upwelling pattern, which is actively debated for the study area.

  16. Improvement in metrology on new 3D-AFM platform

    NASA Astrophysics Data System (ADS)

    Schmitz, Ingo; Osborn, Marc; Hand, Sean; Chen, Qi

    2008-10-01

According to the 2007 edition of the ITRS roadmap, the requirement for CD uniformity of isolated lines on a binary or attenuated phase shift mask is 2.1 nm (3σ) in 2008 and requires improvement to 1.3 nm (3σ) in 2010. In order to meet the increasing demand for CD uniformity on photo masks, improved CD metrology is required. A next generation AFM, InSight™ 3DAFM, has been developed to meet these increased requirements for advanced photo mask metrology. The new system achieves a 2X improvement in CD and depth precision on advanced photo mask features over the previous generation 3D-AFM. This paper provides measurement data including depth, CD, and sidewall angle metrology. In addition, the unique capabilities of damage-free defect inspection and Nanoimprint characterization by 3D AFM are presented.

  17. 3-D Technology Approaches for Biological Ecologies

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Austin, Robert; U. S-China Physical-Oncology Sciences Alliance (PS-OA) Team

Constructing three-dimensional (3-D) landscapes is an unavoidable issue in the deep study of biological ecologies, because at all scales in nature, ecosystems are composed of complex 3-D environments and biological behaviors. If a 3-D technology could help complex ecosystems be built easily and mimic the in vivo microenvironment realistically with flexible environmental controls, it would be a powerful thrust to assist researchers in their explorations. For years, we have been utilizing and developing different technologies for constructing 3-D micro landscapes for biophysics studies in vitro. Here, I will review our past efforts, including probing cancer cell invasiveness with 3-D silicon-based Tepuis, constructing 3-D microenvironments for cell invasion and metastasis through polydimethylsiloxane (PDMS) soft lithography, as well as explorations of optimized stenting positions for coronary bifurcation disease with 3-D wax printing and the latest home-designed 3-D bio-printer. Although 3-D technology is currently considered not mature enough for arbitrary 3-D micro-ecological models with easy design and fabrication, I hope that through my talk the audience will be able to sense its significance and the predictable breakthroughs in the near future. This work was supported by the State Key Development Program for Basic Research of China (Grant No. 2013CB837200), the National Natural Science Foundation of China (Grant No. 11474345) and the Beijing Natural Science Foundation (Grant No. 7154221).

  18. RT3D tutorials for GMS users

    SciTech Connect

    Clement, T.P.; Jones, N.L.

    1998-02-01

    RT3D (Reactive Transport in 3-Dimensions) is a computer code that solves coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in a three dimensional saturated porous media. RT3D was developed from the single-species transport code, MT3D (DoD-1.5, 1997 version). As with MT3D, RT3D also uses the USGS groundwater flow model MODFLOW for computing spatial and temporal variations in groundwater head distribution. This report presents a set of tutorial problems that are designed to illustrate how RT3D simulations can be performed within the Department of Defense Groundwater Modeling System (GMS). GMS serves as a pre- and post-processing interface for RT3D. GMS can be used to define all the input files needed by RT3D code, and later the code can be launched from within GMS and run as a separate application. Once the RT3D simulation is completed, the solution can be imported to GMS for graphical post-processing. RT3D v1.0 supports several reaction packages that can be used for simulating different types of reactive contaminants. Each of the tutorials, described below, provides training on a different RT3D reaction package. Each reaction package has different input requirements, and the tutorials are designed to describe these differences. Furthermore, the tutorials illustrate the various options available in GMS for graphical post-processing of RT3D results. Users are strongly encouraged to complete the tutorials before attempting to use RT3D and GMS on a routine basis.

  19. Programming standards for effective S-3D game development

    NASA Astrophysics Data System (ADS)

    Schneider, Neil; Matveev, Alexander

    2008-02-01

    When a video game is in development, more often than not it is being rendered in three dimensions - complete with volumetric depth. It's the PC monitor that is taking this three-dimensional information, and artificially displaying it in a flat, two-dimensional format. Stereoscopic drivers take the three-dimensional information captured from DirectX and OpenGL calls and properly display it with a unique left and right sided view for each eye so a proper stereoscopic 3D image can be seen by the gamer. The two-dimensional limitation of how information is displayed on screen has encouraged programming short-cuts and work-arounds that stifle this stereoscopic 3D effect, and the purpose of this guide is to outline techniques to get the best of both worlds. While the programming requirements do not significantly add to the game development time, following these guidelines will greatly enhance your customer's stereoscopic 3D experience, increase your likelihood of earning Meant to be Seen certification, and give you instant cost-free access to the industry's most valued consumer base. While this outline is mostly based on NVIDIA's programming guide and iZ3D resources, it is designed to work with all stereoscopic 3D hardware solutions and is not proprietary in any way.

  20. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
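
    A minimal sketch of the DIBR alternative described above: pixels of the single rendered view are shifted horizontally according to the z-buffer to synthesize the second eye, with naive occlusion ordering and hole filling. The disparity mapping and parameters are illustrative assumptions, not a specific driver's implementation.

    ```python
    import numpy as np

    def render_second_view(color, depth, eye_separation=10.0, convergence_depth=0.5):
        """Depth-Image-Based Rendering: synthesize the other eye's image by
        shifting pixels horizontally according to the z-buffer.

        color : (H, W, 3) rendered frame from the single virtual camera
        depth : (H, W) normalized depth in [0, 1] (z-buffer)
        eye_separation : maximum pixel disparity
        convergence_depth : depth rendered with zero disparity (screen plane)
        """
        h, w, _ = color.shape
        disparity = np.rint(eye_separation * (convergence_depth - depth)).astype(int)
        out = np.zeros_like(color)
        filled = np.zeros((h, w), dtype=bool)
        # paint far pixels first so nearer pixels overwrite them (occlusion handling)
        for d_val in np.unique(disparity):
            ys, xs = np.where(disparity == d_val)
            xt = np.clip(xs + d_val, 0, w - 1)
            out[ys, xt] = color[ys, xs]
            filled[ys, xt] = True
        # naive hole filling: copy the nearest filled pixel to the left
        for y in range(h):
            last = color[y, 0]
            for x in range(w):
                if filled[y, x]:
                    last = out[y, x]
                else:
                    out[y, x] = last
        return out

    # toy frame: gradient color image with a near square at the center
    col = np.zeros((64, 64, 3)); col[..., 0] = np.linspace(0, 1, 64)
    dep = np.ones((64, 64)); dep[24:40, 24:40] = 0.2        # near object
    right_eye = render_second_view(col, dep)
    ```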

  1. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perceptions which otherwise might be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, the stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images generally has been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described. The applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs for the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, the lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  2. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

The forthcoming STEREO-mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which asks for suitable data presentation techniques. A key feature of STEREO is that it will provide for the first time a 3D-view of the Sun and the solar corona. In our normal environment we see objects three dimensionally because the light from real 3D objects needs different travel times to our left and right eye. As a consequence we see slightly different images with our eyes, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D-visualization of scientific and other data on paper, TV, computer screen, cinema, etc. are well known, e.g. the two-colour anaglyph technique, shutter glasses, polarization filters and head-mounted displays. We discuss advantages and disadvantages of these techniques and how they can be applied to STEREO-data. The 3D-visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distribution. In advance of STEREO we test the method with data from SOHO, which provides us different viewpoints through the solar rotation. This restricts the analysis to structures which remain stationary for several days. Real STEREO-data will not be affected by these limitations, however.

  3. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951

  4. Multivariate 3D modelling of Scottish soil properties

    NASA Astrophysics Data System (ADS)

    Poggio, Laura; Gimona, Alessandro

    2015-04-01

Information regarding soil properties across landscapes at national or continental scales is critical for better soil and environmental management and for climate regulation and adaptation policy. The prediction of soil property variation in space and time and its uncertainty is an important part of environmental modelling. Soil properties, and in particular the 3 fractions of soil texture, exhibit strong co-variation among themselves, and therefore taking this correlation into account leads to spatially more accurate results. In this study the continuous vertical and lateral distributions of relevant soil properties in Scottish soils were modelled with a multivariate 3D-GAM+GS approach. The approach used involves 1) modelling the multivariate trend with full 3D spatial correlation, i.e., exploiting the values of the neighbouring pixels in 3D-space, and 2) 3D kriging to interpolate the residuals. The values at each cell for each of the considered depth layers were defined using a hybrid GAM-geostatistical 3D model, combining the fitting of a GAM (Generalised Additive Model) to estimate the multivariate trend of the variables, using a 3D smoother with related covariates. Gaussian simulations of the model residuals were used as the spatial component to account for local details. A dataset of about 26,000 horizons (7,800 profiles) was used for this study. A validation set was randomly selected as 25% of the full dataset. Numerous covariates derived from globally available data, such as MODIS and SRTM, are considered. The results of the 3D-GAM+kriging showed low RMSE values, good R squared and an accurate reproduction of the spatial structure of the data for a range of soil properties. The results have an out-of-sample RMSE between 10 and 15% of the observed range when taking into account the whole profile. The approach followed allows the assessment of the uncertainty of both the trend and the residuals.

  5. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main value, 3D monitors should provide observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass-market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have been aimed at overcoming some of the classical limitations of InI systems, like the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, or the limited range of viewing angles of InI monitors.

  6. Stereo and motion in the display of 3-D scattergrams

    SciTech Connect

    Littlefield, R.J.

    1982-04-01

    A display technique is described that is useful for detecting structure in a 3-dimensional distribution of points. The technique uses a high resolution color raster display to produce a 3-D scattergram. Depth cueing is provided by motion parallax using a capture-replay mechanism. Stereo vision depth cues can also be provided. The paper discusses some general aspects of stereo scattergrams and describes their implementation as red/green anaglyphs. These techniques have been used with data sets containing over 20,000 data points. They can be implemented on relatively inexpensive hardware. (A film of the display was shown at the conference.)
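
    A hedged sketch of a red/green anaglyph scattergram in matplotlib, standing in for the original raster-display implementation; the perspective model, eye offset and toy data are assumptions.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def anaglyph_scattergram(points, eye_offset=0.05, viewing_distance=4.0):
        """Red/green anaglyph of a 3-D scattergram: each point is projected twice
        with a small horizontal eye separation; red marks the left-eye view and
        green the right-eye view, to be fused with red/green glasses."""
        x, y, z = points.T
        scale = viewing_distance / (viewing_distance + z)     # simple perspective
        fig, ax = plt.subplots(figsize=(6, 6))
        for sign, colour in ((-1, "red"), (+1, "green")):
            xe = (x + sign * eye_offset) * scale               # shift the eye, then project
            ax.scatter(xe, y * scale, s=4, c=colour, alpha=0.6)
        ax.set_aspect("equal")
        return fig

    # toy cluster: 2,000 points on a tilted, noisy ring
    rng = np.random.default_rng(1)
    theta = rng.uniform(0, 2 * np.pi, 2000)
    pts = np.column_stack([np.cos(theta), np.sin(theta), 0.5 * np.cos(theta)])
    pts += rng.normal(scale=0.05, size=pts.shape)
    anaglyph_scattergram(pts)
    plt.show()
    ```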

  7. A perceptual preprocess method for 3D-HEVC

    NASA Astrophysics Data System (ADS)

    Shi, Yawen; Wang, Yongfang; Wang, Yubing

    2015-08-01

A perceptual preprocessing method for 3D-HEVC coding is proposed in this paper. Firstly, we propose a new JND model, which accounts for the luminance contrast masking effect, the spatial masking effect, the temporal masking effect, saliency characteristics, and depth information. We utilize the spectral residual approach to obtain the saliency map and build a visual saliency factor based on it. In order to distinguish the sensitivity of objects at different depths, we segment each texture frame into foreground and background with an automatic threshold selection algorithm using the corresponding depth information, and then build a depth weighting factor. A JND modulation factor is built as a linear combination of the visual saliency factor and the depth weighting factor to adjust the JND threshold. Then, we apply the proposed JND model to 3D-HEVC for residual filtering and distortion coefficient processing. The filtering process sets the residual value to zero if the JND threshold is greater than the residual value, and otherwise subtracts the JND threshold from the residual value. Experimental results demonstrate that the proposed method can achieve an average bit-rate reduction of 15.11% compared to the original coding scheme with HTM 12.1, while maintaining the same subjective quality.
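
    The residual-filtering rule stated above can be written compactly; in the sketch below the sign handling is an assumption, since the abstract describes the rule in terms of magnitudes only.

    ```python
    import numpy as np

    def jnd_filter_residuals(residual, jnd_threshold):
        """Residual filtering described above: zero the residual where the JND
        threshold exceeds it, otherwise subtract the threshold from its magnitude
        while preserving the sign of the residual (sign handling assumed)."""
        mag = np.abs(residual)
        filtered = np.where(mag <= jnd_threshold, 0.0, mag - jnd_threshold)
        return np.sign(residual) * filtered

    # toy block of prediction residuals and a per-pixel JND threshold
    res = np.array([[3.0, -1.0], [6.0, -8.0]])
    jnd = np.array([[2.0,  2.0], [7.0,  5.0]])
    print(jnd_filter_residuals(res, jnd))   # [[ 1. -0.] [ 0. -3.]]
    ```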

  8. 3D video coding: an overview of present and upcoming standards

    NASA Astrophysics Data System (ADS)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  9. Advanced 3D imaging lidar concepts for long range sensing

    NASA Astrophysics Data System (ADS)

    Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

    2014-06-01

Recent developments in 3D imaging lidar are presented. Long-range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step-change advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single-photon-counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9 km, with 4 μJ pulses at a frame rate of 100 kHz, using a low-cost fibre laser operating at a wavelength of λ=1.5 μm. The range resolution is less than 4 cm, providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example: absolute rangefinding and depth profiling for long-range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system, are also proposed.
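
    A minimal sketch of how a single-pixel range is formed in time-correlated single-photon-counting lidar (histogram the photon arrival times accumulated over many pulses, take the peak bin, convert to range); the bin width, background level and toy data are assumptions, and the Geiger-mode array and multispectral aspects are not reproduced.

    ```python
    import numpy as np

    C = 299_792_458.0      # speed of light, m/s

    def range_from_timestamps(timestamps_s, bin_width_s=100e-12):
        """Time-correlated single-photon counting: histogram photon arrival times,
        take the peak bin as the round-trip time, and convert to a one-way range."""
        edges = np.arange(0.0, timestamps_s.max() + bin_width_s, bin_width_s)
        counts, edges = np.histogram(timestamps_s, bins=edges)
        peak = np.argmax(counts)
        t_peak = 0.5 * (edges[peak] + edges[peak + 1])
        return 0.5 * C * t_peak

    # toy data: a 9 km target (about 60 us round trip) plus uniform background counts
    rng = np.random.default_rng(2)
    signal = rng.normal(loc=2 * 9000.0 / C, scale=150e-12, size=500)
    background = rng.uniform(0.0, 70e-6, size=5000)
    print(range_from_timestamps(np.concatenate([signal, background])))   # ~9000 m
    ```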

  10. Modeling 3D faces from samplings via compressive sensing

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Tang, Yanlong; Hu, Ping

    2013-07-01

3D data is easier to acquire for family entertainment purposes today because of the mass production, low cost and portability of domestic RGBD sensors, e.g., Microsoft Kinect. However, the accuracy of facial modeling is affected by the roughness and instability of the raw input data from such sensors. To overcome this problem, we introduce a compressive sensing (CS) method to build a novel 3D super-resolution scheme that reconstructs high-resolution facial models from rough samples captured by Kinect. Unlike the simple frame-fusion super-resolution method, this approach aims to acquire compressed samples for storage before a high-resolution image is produced. In this scheme, depth frames are first captured and then each of them is measured into compressed samples using sparse coding. Next, the samples are fused to produce an optimal one, and finally a high-resolution image is recovered from the fused sample. This framework is able to recover the 3D facial model of a given user from compressed samples, which can reduce storage space as well as measurement cost in future devices, e.g., single-pixel depth cameras. Hence, this work can potentially be applied in future applications, such as access control systems using face recognition, and smart phones with depth cameras, which need high resolution and short measurement time.
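
    As background, a generic 1-D compressive-sensing recovery (random measurements of a sparse vector inverted with ISTA) is sketched below; it stands in for, and is not, the paper's sparse-coding measurement and frame-fusion pipeline, and all parameters are assumptions.

    ```python
    import numpy as np

    def ista(A, y, lam=0.05, n_iter=500):
        """Iterative Shrinkage-Thresholding Algorithm: recover a sparse x from
        compressed measurements y = A x by minimizing ||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    # toy example: 400-sample signal with 10 nonzeros, measured by 120 random projections
    rng = np.random.default_rng(3)
    x_true = np.zeros(400)
    x_true[rng.choice(400, 10, replace=False)] = rng.normal(size=10)
    A = rng.normal(size=(120, 400)) / np.sqrt(120)
    y = A @ x_true
    x_rec = ista(A, y, lam=0.01)
    print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))   # small relative error
    ```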

  11. 3D Dynamic Echocardiography with a Digitizer

    NASA Astrophysics Data System (ADS)

    Oshiro, Osamu; Matani, Ayumu; Chihara, Kunihiro

    1998-05-01

In this paper, a three-dimensional (3D) dynamic ultrasound (US) imaging system, where a US brightness-mode (B-mode) image triggered with an R-wave of the electrocardiogram (ECG) was obtained with an ultrasound diagnostic device and the location and orientation of the US probe were simultaneously measured with a 3D digitizer, is described. The obtained B-mode image was then projected onto a virtual 3D space with the proposed interpolation algorithm using a Gaussian operator. Furthermore, a 3D image was presented on a cathode ray tube (CRT) and stored in virtual reality modeling language (VRML). We performed an experiment to reconstruct a 3D heart image in systole using this system. The experimental results indicate that the system enables the visualization of the 3D and internal structure of a heart viewed from any angle and has potential for use in dynamic imaging, intraoperative ultrasonography and tele-medicine.
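
    A hedged sketch of one way the Gaussian-operator interpolation could project tracked B-mode pixels into a 3D volume (Gaussian splatting followed by weight normalization); the voxel size, kernel width and names are assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    def splat_bmode_into_volume(volume, weight, pixel_xyz, pixel_val, voxel_size, sigma):
        """Accumulate tracked B-mode pixel values into a 3-D volume with Gaussian
        weights (a simple stand-in for the interpolation operator described above).

        volume, weight : (nx, ny, nz) accumulators for weighted values and weights
        pixel_xyz : (N, 3) pixel positions in volume coordinates (from the digitizer)
        pixel_val : (N,) B-mode intensities
        """
        radius = int(np.ceil(2 * sigma / voxel_size))
        shape = np.array(volume.shape)
        for p, v in zip(pixel_xyz, pixel_val):
            centre = np.round(p / voxel_size).astype(int)
            lo = np.maximum(centre - radius, 0)
            hi = np.minimum(centre + radius + 1, shape)
            gx, gy, gz = np.mgrid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
            d2 = ((gx * voxel_size - p[0]) ** 2 +
                  (gy * voxel_size - p[1]) ** 2 +
                  (gz * voxel_size - p[2]) ** 2)
            w = np.exp(-0.5 * d2 / sigma ** 2)
            volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] += w * v
            weight[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] += w

    # toy usage: splat one tracked pixel, then normalize by the accumulated weights
    vol = np.zeros((32, 32, 32)); wts = np.zeros_like(vol)
    pix = np.array([[1.0, 1.2, 0.8]]); val = np.array([200.0])
    splat_bmode_into_volume(vol, wts, pix, val, voxel_size=0.1, sigma=0.1)
    reconstructed = vol / np.maximum(wts, 1e-6)
    ```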

  12. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  13. 3D shape measurements for non-diffusive objects using fringe projection techniques

    NASA Astrophysics Data System (ADS)

    Su, Wei-Hung; Tseng, Bae-Heng; Cheng, Nai-Jen

    2013-09-01

A scanning approach using holographic techniques to perform 3D shape measurement for a non-diffusive object is proposed. Even when the depth discontinuity on the inspected surface is large, the proposed method can retrieve the 3D shape precisely.

  14. Formation of 3D structures in a volumetric photocurable material via a holographic method

    NASA Astrophysics Data System (ADS)

    Vorzobova, N. D.; Bulgakova, V. G.; Veselov, V. O.

    2015-12-01

    The principle of forming 3D polymer structures is considered, based on the display of the 3D intensity distribution of radiation formed by a hologram in the bulk of a photocurable material. The conditions are determined for limiting the cure depth and reproducing the projected wavefront configuration.

  15. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  16. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  17. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.

  18. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

    In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on them were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them, or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create moveable cross-sections (profiles) and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales help to facilitate the use.

  19. Applications of 2D to 3D conversion for educational purposes

    NASA Astrophysics Data System (ADS)

    Koido, Yoshihisa; Morikawa, Hiroyuki; Shiraishi, Saki; Takeuchi, Soya; Maruyama, Wataru; Nakagori, Toshio; Hirakata, Masataka; Shinkai, Hirohisa; Kawai, Takashi

    2013-03-01

    There are three main approaches to creating stereoscopic 3D (S3D) content: stereo filming using two cameras, stereo rendering of 3D computer graphics, and 2D to S3D conversion by adding binocular information to 2D material images. Although manual "off-line" conversion can control the amount of parallax flexibly, 2D material images are converted according to monocular information in most cases, and the flexibility of 2D to S3D conversion has not been exploited. If depth is expressed flexibly, the comprehension and interest elicited by converted S3D content are anticipated to differ from those elicited by 2D. Therefore, in this study we created new S3D content for education by applying 2D to S3D conversion. For surgical education, we created S3D surgical operation content under the supervision of a surgeon, using a partial 2D to S3D conversion technique expected to concentrate viewers' attention on significant areas. For art education, we converted Ukiyo-e prints, traditional Japanese artworks made from woodcuts. Converting this content, which has little depth information, into S3D is expected to produce different cognitive processes from those evoked by 2D content, e.g., the excitation of interest and the understanding of spatial information. In addition, the effects of the representation of these contents were investigated.

  20. 3D seismic data reconstruction based on complex-valued curvelet transform in frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Chen, Xiaohong; Li, Hongxing

    2015-02-01

    Traditional seismic data sampling must follow the Nyquist sampling theorem. However, field data acquisition may not meet the sampling criteria due to missing traces or limits on exploration cost, causing a prestack data reconstruction problem. Recently, researchers have proposed many useful methods to regularize seismic data. In this paper, a 3D seismic data reconstruction method based on the Projections Onto Convex Sets (POCS) algorithm and a complex-valued curvelet transform (CCT) is introduced in the frequency domain. In order to improve reconstruction efficiency and reduce computation time, the seismic data are transformed from the t-x-y domain to the f-x-y domain and the reconstruction is processed for every frequency slice. The threshold parameter selected at each iteration is important for reconstruction efficiency, therefore an exponential square root decreased (ESRD) threshold is proposed. The experimental results show that, for the same reconstruction result, the ESRD threshold greatly reduces the number of iterations and improves reconstruction efficiency compared to other thresholds. We also analyze the anti-noise ability of the CCT-based POCS reconstruction method. Example studies on synthetic and real marine seismic data show that the proposed method is efficient and applicable.
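    A hedged sketch of the POCS loop described above, applied to one frequency slice. A plain 2-D FFT stands in for the complex-valued curvelet transform (which is not commonly available in Python), and the exponentially decaying threshold schedule is an assumed stand-in for the paper's ESRD formula.

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=50, p_max=0.99, p_min=0.01):
    """POCS reconstruction of one f-x-y frequency slice with missing traces.

    data : 2-D complex array, zero-filled at missing trace positions
    mask : boolean array, True where traces were actually acquired
    """
    recon = data.copy()
    c_max = np.abs(np.fft.fft2(data)).max()
    for k in range(n_iter):
        # Exponentially decaying threshold between p_max*c_max and p_min*c_max.
        tau = c_max * p_max * (p_min / p_max) ** (k / (n_iter - 1))
        coeffs = np.fft.fft2(recon)
        coeffs *= np.abs(coeffs) >= tau          # keep only the strongest coefficients
        est = np.fft.ifft2(coeffs)
        recon = np.where(mask, data, est)        # re-insert the acquired traces
    return recon
```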

  1. An aerial 3D printing test mission

    NASA Astrophysics Data System (ADS)

    Hirsch, Michael; McGuire, Thomas; Parsons, Michael; Leake, Skye; Straub, Jeremy

    2016-05-01

    This paper provides an overview of an aerial 3D printing technology, its development and its testing. This technology is potentially useful in its own right. In addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, is discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used as well as composites including metal, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High-altitude balloon flights, as well as parabolic flight tests, will be used to test the effects of microgravity on 3D printing. Zero pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. Then, a small scale prototype can be sent into low-Earth orbit as a 3-U cube satellite. With the ability to 3D print in space demonstrated, future missions can launch production hardware through which the sustainability and durability of structures in space will be greatly improved.

  2. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  3. Interlopers 3D: experiences designing a stereoscopic game

    NASA Astrophysics Data System (ADS)

    Weaver, James; Holliman, Nicolas S.

    2014-03-01

    Background In recent years 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming. Aims To investigate the visual cues that are present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display. To implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible, or at least very difficult, to play in non-stereoscopic mode. Method A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgments and target the closest enemies first. A group of twenty participants played both a basic and an advanced version of the game in both monoscopic 2D and stereoscopic 3D. Results The results show that in both the basic and advanced game participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that disrupting the depth-from-motion cue made the game more difficult in monoscopic 2D. Results also show a certain amount of learning taking place, with players scoring higher and finishing the game faster as the experiment progressed. Conclusions Although the game was not impossible to play in monoscopic 2D, participants' results show that it put them at a significant disadvantage compared to playing in stereoscopic 3D.

  4. 360-degree panorama in 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This 360-degree panorama was taken in stereo by the deployed Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses (red left lens, blue right lens) are necessary to help identify surface detail. All three petals, the perimeter of the deflated airbags, deployed rover Sojourner, forward and backward ramps and prominent surface features are visible, including the double Twin Peaks at the horizon. Sojourner would later investigate the rock Barnacle Bill just to its left in this image, and the larger rock Yogi at its forward right.

    The IMP is a stereo imaging system with color capability provided by 24 selectable filters -- twelve filters per 'eye.' It stands 1.8 meters above the Martian surface, and has a resolution of two millimeters at a range of two meters. Stereoscopic imaging brings exceptional clarity and depth to many of the features in this image, particularly the ridge beyond the far left petal and the large rock Yogi. The curvature and misalignment of several sections are due to image parallax.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  5. Regional geothermal 3D modelling in Denmark

    NASA Astrophysics Data System (ADS)

    Poulsen, S. E.; Balling, N.; Bording, T. S.; Nielsen, S. B.

    2012-04-01

    In the pursuit of sustainable and low carbon emission energy sources, increased global attention has been given to the exploration and exploitation of geothermal resources within recent decades. In 2009 a national multi-disciplinary geothermal research project was established. As a significant part of this project, 3D temperature modelling is to be carried out, with special emphasis on temperatures of potential geothermal reservoirs in the Danish area. The Danish subsurface encompasses low enthalpy geothermal reservoirs of mainly Triassic and Jurassic age. Geothermal plants at Amager (Copenhagen) and Thisted (Northern Jutland) have the capacity of supplying the district heating network with up to 14 MW and 7 MW, respectively, by withdrawing warm pore water from the Gassum (Lower Jurassic/Upper Triassic) and Bunter (Lower Triassic) sandstone reservoirs, respectively. Explorative studies of the subsurface temperature regime are typically based on a combination of observations and modelling. In this study, the open-source groundwater modelling code MODFLOW is modified to simulate the subsurface temperature distribution in three dimensions by taking advantage of the mathematical similarity between saturated groundwater flow (Darcy flow) and heat conduction. A numerical model of the subsurface geology in Denmark is built and parameterized from lithological information derived from joint interpretation of seismic surveys and borehole information. Boundary conditions are constructed from knowledge about the heat flow from the Earth's interior and the shallow ground temperature. Matrix thermal conductivities have been estimated from analysis of high-resolution temperature logs measured in deep wells, and porosity-depth relations are included using interpreted main lithologies. The model takes into account the dependency of thermal conductivity on temperature and pressure. Moreover, a transient model-based correction of the paleoclimatic thermal disturbance caused by the
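    The mathematical similarity the study exploits can be written out explicitly; a short sketch of the two governing equations, with symbols chosen here rather than taken from the abstract:

```latex
\text{Darcy flow:}\quad \mathbf{q} = -K\,\nabla h, \qquad
S_s\,\frac{\partial h}{\partial t} = \nabla\cdot\left(K\,\nabla h\right) + W
\qquad\qquad
\text{heat conduction:}\quad \mathbf{q}_T = -\lambda\,\nabla T, \qquad
\rho c\,\frac{\partial T}{\partial t} = \nabla\cdot\left(\lambda\,\nabla T\right) + A
```

    Substituting hydraulic head h with temperature T, hydraulic conductivity K with thermal conductivity λ, specific storage S_s with volumetric heat capacity ρc, and the recharge term W with heat production A lets a groundwater-flow solver such as MODFLOW integrate the conductive temperature field directly.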

  6. Efficient strategies and imaging conditions for elastic prestack reverse-time migration of reflection seismic data

    NASA Astrophysics Data System (ADS)

    Nguyen, Bao D.

    Imaging with prestack reverse-time migration (RTM) is typically approached via a zero-lag crosscorrelation between source and receiver wavefields, which imposes unnecessarily stringent requirements for computational resources and disk storage. The imaging principle for reflectivity is analyzed, and we demonstrate that a single maximal energy arrival event is often sufficient for migration imaging. Methods to alleviate the cost of crosscorrelation imaging are proposed and categorized into reconstructive and non-reconstructive schemes. Source wavefield reconstruction treats the source extrapolation as a method of providing the auxiliary conditions for an initial-boundary value problem. A first-pass (forward-time) extrapolation for the source wavefield identifies the boundary and/or initial values necessary to uniquely reconstruct it using a second (reverse-time) backward propagation. Mixed value, or hybrid, reconstruction is proposed as the most accurate alternative to storing the source wavefield time history. Reconstructing the source wavefield reduces storage costs by up to two orders of magnitude without an appreciable loss of image quality. Boundary value and initial value reconstruction methods are extended from acoustic to elastic RTM. Non-reconstructive approaches deviate from the conventional imaging paradigm, as only the most salient information required for imaging is kept. A maximal energy arrival event (termed the 'excitation amplitude') imaging condition is explored as the direct analog for the theoretical reflection coefficient for acoustic isotropic media, and extended for elastic RTM. Sparse crosscorrelation is proposed as an equivalent method to standard crosscorrelation where the migrated image is now represented with a minimized data set. Time-binning is a dynamic sorting algorithm with linear time complexity, proposed for use with both the excitation amplitude and sparse crosscorrelation approaches to further expedite imaging. These parsimonious imaging
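    A minimal sketch contrasting the two imaging conditions discussed above, with assumed (nt, nz, nx) wavefield arrays; this illustrates the idea rather than the dissertation's actual implementation.

```python
import numpy as np

def crosscorrelation_image(src_wavefield, rcv_wavefield):
    """Conventional zero-lag crosscorrelation imaging condition.

    Both wavefields are (nt, nz, nx) arrays; the full source time history must
    be stored or reconstructed, which is what drives the storage cost."""
    return np.sum(src_wavefield * rcv_wavefield, axis=0)

def excitation_amplitude_image(src_wavefield, rcv_wavefield):
    """Excitation-amplitude imaging condition: at each image point keep only the
    time of maximal source amplitude and divide the receiver wavefield by the
    source amplitude there (a direct analogue of a reflection coefficient)."""
    t_exc = np.argmax(np.abs(src_wavefield), axis=0)        # (nz, nx) excitation times
    iz, ix = np.indices(t_exc.shape)
    s_exc = src_wavefield[t_exc, iz, ix]
    r_exc = rcv_wavefield[t_exc, iz, ix]
    return r_exc / np.where(np.abs(s_exc) > 1e-12, s_exc, 1e-12)
```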

  7. Topology dictionary for 3D video understanding.

    PubMed

    Tung, Tony; Matsuyama, Takashi

    2012-08-01

    This paper presents a novel approach that achieves 3D video understanding. 3D video consists of a stream of 3D models of subjects in motion. The acquisition of long sequences requires large storage space (2 GB for 1 min). Moreover, it is tedious to browse data sets and extract meaningful information. We propose the topology dictionary to encode and describe 3D video content. The model consists of a topology-based shape descriptor dictionary which can be generated from either extracted patterns or training sequences. The model relies on 1) topology description and classification using Reeb graphs, and 2) a Markov motion graph to represent topology change states. We show that the use of Reeb graphs as the high-level topology descriptor is relevant. It allows the dictionary to automatically model complex sequences, whereas other strategies would require prior knowledge on the shape and topology of the captured subjects. Our approach serves to encode 3D video sequences, and can be applied for content-based description and summarization of 3D video sequences. Furthermore, topology class labeling during a learning process enables the system to perform content-based event recognition. Experiments were carried out on various 3D videos. We showcase an application for 3D video progressive summarization using the topology dictionary. PMID:22745004

  8. 3-D seismology in the Arabian Gulf

    SciTech Connect

    Al-Husseini, M.; Chimblo, R.

    1995-08-01

    Since 1977 when Aramco and GSI (Geophysical Services International) pioneered the first 3-D seismic survey in the Arabian Gulf, under the guidance of Aramco's Chief Geophysicist John Hoke, 3-D seismology has been effectively used to map many complex subsurface geological phenomena. By the mid-1990s extensive 3-D surveys were acquired in Abu Dhabi, Oman, Qatar and Saudi Arabia. Also in the mid-1990s Bahrain, Kuwait and Dubai were preparing to record surveys over their fields. On the structural side 3-D has refined seismic maps, focused fault and fracture systems, and outlined the distribution of facies, porosity and fluid saturation. In field development, 3D has not only reduced drilling costs significantly, but has also improved the understanding of fluid behavior in the reservoir. In Oman, Petroleum Development Oman (PDO) has now acquired the first Gulf 4-D seismic survey (time-lapse 3D survey) over the Yibal Field. The 4-D survey will allow PDO to directly monitor water encroachment in the highly-faulted Cretaceous Shu'aiba reservoir. In exploration, 3-D seismology has resolved complex prospects with structural and stratigraphic complications and reduced the risk in the selection of drilling locations. The many case studies from Saudi Arabia, Oman, Qatar and the United Arab Emirates, which are reviewed in this paper, attest to the effectiveness of 3D seismology in exploration and production, in clastic and carbonate reservoirs, and in the Mesozoic and Paleozoic.

  9. A 3D Geostatistical Mapping Tool

    1999-02-09

    This software provides accurate 3D reservoir modeling tools and high quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest neighbor methods.
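    A hedged sketch of the simplest of the listed interpolators, a k-nearest-neighbour average onto grid nodes; kriging and sequential Gaussian simulation are not shown, and all names and sizes are illustrative rather than taken from the record.

```python
import numpy as np

def knn_grid(xy_data, values, xy_grid, k=3):
    """Map scattered reservoir-property samples onto grid nodes using a simple
    k-nearest-neighbour average.

    xy_data : (n_samples, 2) sample locations       values : (n_samples,) property values
    xy_grid : (n_nodes, 2) grid-node locations
    """
    out = np.empty(len(xy_grid))
    for i, g in enumerate(xy_grid):
        d = np.linalg.norm(xy_data - g, axis=1)     # distances to all samples
        nearest = np.argsort(d)[:k]                 # indices of the k closest samples
        out[i] = values[nearest].mean()
    return out
```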

  10. 3D, or Not to Be?

    ERIC Educational Resources Information Center

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  11. Stereoscopic Investigations of 3D Coulomb Balls

    SciTech Connect

    Kaeding, Sebastian; Melzer, Andre; Arp, Oliver; Block, Dietmar; Piel, Alexander

    2005-10-31

    In dusty plasmas particles are arranged due to the influence of external forces and the Coulomb interaction. Recently Arp et al. were able to generate 3D spherical dust clouds, so-called Coulomb balls. Here, we present measurements that reveal the full 3D particle trajectories from stereoscopic imaging.

  12. 3-D structures of planetary nebulae

    NASA Astrophysics Data System (ADS)

    Steffen, W.

    2016-07-01

    Recent advances in the 3-D reconstruction of planetary nebulae are reviewed. We include not only results of 3-D reconstructions, but also the current techniques in terms of general methods and software. In order to obtain more accurate reconstructions, we suggest extending the widely used assumption of homologous nebula expansion that maps spectroscopically measured velocity to position along the line of sight.
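    Under homologous expansion the velocity-to-position mapping mentioned above is linear; in symbols (notation assumed here, not taken from the review):

```latex
\mathbf{r} = \mathbf{v}\,t_{\mathrm{kin}} \;\;\Longrightarrow\;\; z = v_{\mathrm{los}}\,t_{\mathrm{kin}}
```

    so a Doppler-measured line-of-sight velocity converts to position along the line of sight through a single kinematic scale factor.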

  13. Wow! 3D Content Awakens the Classroom

    ERIC Educational Resources Information Center

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  14. 3D Printed Block Copolymer Nanostructures

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  15. Static & Dynamic Response of 3D Solids

    1996-07-15

    NIKE3D is a large deformations 3D finite element code used to obtain the resulting displacements and stresses from multi-body static and dynamic structural thermo-mechanics problems with sliding interfaces. Many nonlinear and temperature dependent constitutive models are available.

  16. Immersive 3D Geovisualization in Higher Education

    ERIC Educational Resources Information Center

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  17. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  18. Pathways for Learning from 3D Technology

    ERIC Educational Resources Information Center

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2012-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D…

  19. Sensing and 3D Mapping of Soil Compaction

    PubMed Central

    Tekin, Yücel; Kul, Basri; Okursoy, Rasim

    2008-01-01

    Soil compaction is an important physical limiting factor for root growth and plant emergence and is one of the major causes of reduced crop yield worldwide. The objective of this study was to generate 2D/3D soil compaction maps for different depth layers of the soil. To do so, a soil penetrometer was designed and mounted on the three-point hitch of an agricultural tractor, consisting of a mechanical system, a data acquisition system (DAS), and 2D/3D imaging and analysis software. The system was successfully tested in field conditions, measuring soil penetration resistance as a function of depth from 0 to 40 cm at 1 cm intervals. The software allows the user to either tabulate the measured quantities or generate maps as soon as data collection has been terminated. The system may also incorporate GPS data to create geo-referenced soil maps. The software enables the user to graph penetration resistance at a specified coordinate. Alternatively, soil compaction maps can be generated using data collected from multiple coordinates. The data can be automatically stratified to determine the soil compaction distribution at depth layers of 5, 10, ..., 40 cm. It was concluded that the system tested in this study could be used to assess soil compaction in the topsoil and the randomly distributed hardpan formations just below common tillage depths, enabling visualization of spatial variability through the imaging software.
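    A small sketch of the depth-stratification step described above. Array shapes and names are assumptions; the abstract only states 1 cm sampling down to 40 cm and output layers every 5 cm.

```python
import numpy as np

def stratify_profiles(profiles, layer_cm=5):
    """Average 1-cm penetration-resistance readings into depth layers (5, 10, ..., 40 cm).

    profiles : (n_sites, 40) array; row i holds readings at 1 cm steps from 0 to 40 cm at site i.
    Returns an (n_sites, 40 // layer_cm) array of layer means, one column per depth layer.
    """
    n_sites, n_depths = profiles.shape
    return profiles.reshape(n_sites, n_depths // layer_cm, layer_cm).mean(axis=2)
```

    Each column of the result, joined with the GPS coordinates of the sites, then yields one geo-referenced compaction map per depth layer.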

  20. Clinical applications of 3-D dosimeters

    NASA Astrophysics Data System (ADS)

    Wuu, Cheng-Shie

    2015-01-01

    Both 3-D gels and radiochromic plastic dosimeters, in conjunction with dose image readout systems (MRI or optical-CT), have been employed to measure 3-D dose distributions in many clinical applications. The 3-D dose maps obtained from these systems provide a useful tool for clinical dose verification for complex treatment techniques such as IMRT, SRS/SBRT, brachytherapy, and proton beam therapy. These complex treatments present high dose gradient regions at the boundaries between the target and surrounding critical organs. Dose accuracy in these areas can be critical and may affect treatment outcome. In this review, applications of 3-D gels and the PRESAGE dosimeter are reviewed and evaluated in terms of their performance in providing information for clinical dose verification as well as commissioning of various treatment modalities. Future interests and clinical needs in 3-D dosimetry studies are also discussed.

  1. Biocompatible 3D Matrix with Antimicrobial Properties.

    PubMed

    Ion, Alberto; Andronescu, Ecaterina; Rădulescu, Dragoș; Rădulescu, Marius; Iordache, Florin; Vasile, Bogdan Ștefan; Surdu, Adrian Vasile; Albu, Madalina Georgiana; Maniu, Horia; Chifiriuc, Mariana Carmen; Grumezescu, Alexandru Mihai; Holban, Alina Maria

    2016-01-01

    The aim of this study was to develop, characterize and assess the biological activity of a new regenerative 3D matrix with antimicrobial properties, based on collagen (COLL), hydroxyapatite (HAp), β-cyclodextrin (β-CD) and usnic acid (UA). The prepared 3D matrix was characterized by Scanning Electron Microscopy (SEM), Fourier Transform Infrared Microscopy (FT-IRM), Transmission Electron Microscopy (TEM), and X-ray Diffraction (XRD). In vitro qualitative and quantitative analyses performed on cultured diploid cells demonstrated that the 3D matrix is biocompatible, allowing the normal development and growth of MG-63 osteoblast-like cells and exhibited an antimicrobial effect, especially on the Staphylococcus aureus strain, explained by the particular higher inhibitory activity of usnic acid (UA) against Gram positive bacterial strains. Our data strongly recommend the obtained 3D matrix to be used as a successful alternative for the fabrication of three dimensional (3D) anti-infective regeneration matrix for bone tissue engineering. PMID:26805790

  2. Fabrication of 3D Silicon Sensors

    SciTech Connect

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; Kenney, C.; Hasi, J.; Da Via, C.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors including radiation hardness, fast time response, active edge and dual readout capabilities. The fabrication of 3D sensors is, however, rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF, in collaboration with the Stanford Nanofabrication Facility, has successfully fabricated the original (single sided double column type) 3D detectors in two prototype runs, and the third run is now on-going. This paper reports the status of this fabrication work and the resulting yield. The work of other groups, such as the development of double sided 3D detectors, is also briefly reported.

  3. BEAMS3D Neutral Beam Injection Model

    SciTech Connect

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  4. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from traditional map-making to the modern technology where information can be created, edited, managed and analyzed. Like any other models, maps are simplified representations of the real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web servers, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  5. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  6. The psychology of the 3D experience

    NASA Astrophysics Data System (ADS)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  7. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensated the distortion in the elemental images for an EHR projection-type integral 3-D system. Therefore, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.
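    As an illustration of electrically pre-compensating projection distortion, the sketch below pre-warps a frame with the inverse of a single-coefficient radial distortion model. The model, parameter names and values are assumptions; the abstract does not describe the authors' actual compensation scheme.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def precompensate(image, k1, cx, cy):
    """Pre-warp a frame so a projector whose lens distorts radii as r' = r*(1 + k1*r^2)
    places each elemental image back onto its lens centre.

    We sample the ideal image at the radially distorted position, so that after the
    projector applies its own distortion the result appears undistorted on the lens array.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    x, y = xx - cx, yy - cy
    r2 = x * x + y * y
    xs = cx + x * (1 + k1 * r2)
    ys = cy + y * (1 + k1 * r2)
    return map_coordinates(image, [ys, xs], order=1, mode='nearest')
```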

  8. Improvements in the Visualization of Stereoscopic 3D Imagery

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.

    2015-09-01

    A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has embedded mechanisms to perceive depth for extended periods of time without producing eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain displaying scenarios. An important source of eye fatigue originates in the conflict between vergence eye movements and focusing mechanisms. Today's eye-tracking technology makes it possible to know the viewer's gaze direction; hence, 3D imagery can be dynamically corrected based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays based on emulating the vergence and accommodation mechanisms of binocular human vision. Unlike other methods that improve visual comfort by introducing depth distortions into the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.

  9. 3D Multi-Spectrum Sensor System with Face Recognition

    PubMed Central

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system comprising three types of sensors (visible, thermal-IR and time-of-flight (ToF)), which we will refer to as a 3D multi-spectrum sensor system, is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensor information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new, effectively fused features for face recognition, is obtained. PMID:24072025

  10. 3D surface configuration modulates 2D symmetry detection.

    PubMed

    Chen, Chien-Chung; Sio, Lok-Teng

    2015-02-01

    We investigated whether three-dimensional (3D) information in a scene can affect symmetry detection. The stimuli were random dot patterns with 15% dot density. We measured the coherence threshold, or the proportion of dots that were the mirror reflection of the other dots in the other half of the image about a central vertical axis, at 75% accuracy with a 2AFC paradigm under various 3D configurations produced by the disparity between the left and right eye images. The results showed that symmetry detection was difficult when the corresponding dots across the symmetry axis were on different frontoparallel or inclined planes. However, this effect was not due to a difference in distance, as the observers could detect symmetry on a slanted surface, where the depth of the two sides of the symmetric axis was different. The threshold was reduced for a hinge configuration where the join of two slanted surfaces coincided with the axis of symmetry. Our result suggests that the detection of two-dimensional (2D) symmetry patterns is subject to the 3D configuration of the scene; and that coplanarity across the symmetry axis and consistency between the 2D pattern and 3D structure are important factors for symmetry detection. PMID:25536469
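    A sketch of how a stimulus of the kind described above can be generated, assuming a unit square and an illustrative dot count (the abstract specifies only the 15% dot density and the coherence manipulation):

```python
import numpy as np

def symmetric_dot_pattern(n_dots=500, coherence=0.5, rng=None):
    """Random-dot pattern in [-1, 1]^2 in which a proportion 'coherence' of the dots
    are mirror reflections, about the vertical axis, of dots on the other side."""
    if rng is None:
        rng = np.random.default_rng()
    n_pairs = int(round(coherence * n_dots / 2))
    left = rng.uniform([-1, -1], [0, 1], size=(n_pairs, 2))   # left-side members of mirror pairs
    mirrored = left * np.array([-1, 1])                       # their right-side reflections
    noise = rng.uniform([-1, -1], [1, 1], size=(n_dots - 2 * n_pairs, 2))
    return np.vstack([left, mirrored, noise])
```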

  11. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light, mounted on a moving vehicle, to acquire dense transverse profiles of a pavement lane surface. After calibration, the 3D system yields a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm per pixel at 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum of 5000 lines per second, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests of the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.
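    A hedged sketch of one way 3D information can be used for crack detection: a crack appears as a narrow dip below the surrounding pavement surface in each transverse profile. The window size and depth drop are illustrative values only, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import median_filter

def crack_mask(range_image, window=25, drop_mm=2.0):
    """Flag candidate crack pixels in a range (depth) image from the 3-D laser scanner.

    range_image : 2-D array of camera-to-surface distances in mm, one row per transverse profile.
    Cracks sit below the surrounding surface, i.e. farther from the camera than the local median.
    """
    surface = median_filter(range_image, size=(1, window))   # local surface estimate per profile
    return (range_image - surface) > drop_mm
```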

  12. 3D multi-spectrum sensor system with face recognition.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Kim, Ig-Jae; Lee, Sangyoun

    2013-01-01

    This paper presents a novel three-dimensional (3D) multi-spectrum sensor system, which combines a 3D depth sensor and multiple optical sensors for different wavelengths. Various image sensors, such as visible, infrared (IR) and 3D sensors, have been introduced into the commercial market. Since each sensor has its own advantages under various environmental conditions, the performance of an application depends highly on selecting the correct sensor or combination of sensors. In this paper, a sensor system comprising three types of sensors (visible, thermal-IR and time-of-flight (ToF)), which we will refer to as a 3D multi-spectrum sensor system, is proposed. Since the proposed system integrates information from each sensor into one calibrated framework, the optimal sensor combination for an application can be easily selected, taking into account all combinations of sensor information. To demonstrate the effectiveness of the proposed system, a face recognition system with light and pose variation is designed. With the proposed sensor system, the optimal sensor combination, which provides new, effectively fused features for face recognition, is obtained. PMID:24072025

  13. Northern California Seismic Attenuation: 3-D Qp and Qs models

    NASA Astrophysics Data System (ADS)

    Eberhart-Phillips, D. M.

    2015-12-01

    The northern California crust exhibits a wide range of rock types and deformation processes which produce pronounced heterogeneity in regional attenuation. Using local earthquakes, 3-D Qp and Qs crustal models have been obtained for this region which includes the San Andreas fault system, the Central Valley, the Sierra Nevada batholith, and the Mendocino subduction volcanic system. Path attenuation t* values were determined from P and S spectra of 959 spatially distributed earthquakes, magnitude 2.5-6.0 from 2005-2014, using 1254 stations from NCEDC networks and IRIS Mendocino and Sierra Nevada temporary arrays. The t* data were used in Q inversions, using existing hypocenters and 3-D velocity models, with basic 10-km node spacing. The uneven data coverage was accounted for with linking of nodes into larger areas in order to provide useful Q images across the 3-D volume. The results at shallow depth (< 2 km) show very low Q in the Sacramento Delta, the Eureka area, and parts of the Bay Area. In the brittle crust, fault zones that have high seismicity exhibit low Q. In the lower crust, low Q is observed along fault zones that have large cumulative displacement and have experienced grain size reduction. Underlying active volcanic areas, low Q features are apparent below 20-km depth. Moderately high Q is associated with igneous rocks of the Sierra Nevada and Salinian block, while the Franciscan subduction complex shows moderately low Q. The most prominent high Q feature is related to the Great Valley Ophiolite.
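    For context, the path attenuation operator t* used in such inversions is commonly defined as below; the definition and symbols are standard in attenuation tomography and are not spelled out in the abstract itself:

```latex
t^{*} \;=\; \int_{\mathrm{ray}} \frac{\mathrm{d}s}{v(s)\,Q(s)}, \qquad
A(f) \;=\; G\,A_{0}(f)\,e^{-\pi f\,t^{*}}
```

    where v is the P- or S-wave velocity and Q the corresponding quality factor along the ray path, G collects geometric and site terms, and the exponential decay of the observed spectrum A(f) relative to the source spectrum A0(f) is what the t* measurements fit; the 3-D inversion then distributes 1/Q along the rays.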

  14. Research on gaze-based interaction to 3D display system

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Moo; Jeon, Kyeong-Won; Kim, Sung-Kyu

    2006-10-01

    There have been several reported studies on gaze tracking techniques using monocular or stereo cameras. The most popular gaze estimation techniques are based on PCCR (Pupil Center & Cornea Reflection). These techniques track gaze on a 2D screen or image. In this paper, we address gaze-based 3D interaction with stereo imagery for 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze interaction techniques for a 3D display system. Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. It should be noted that both gaze direction and gaze depth must be estimated for gaze-based interaction in 3D virtual space. In this paper, we address gaze-based 3D interaction techniques with a glassless stereo display. The estimation of gaze direction and gaze depth from both eyes is a new and important research topic for gaze-based 3D interaction. We present our approach for the estimation of gaze direction and gaze depth and show experimental results.
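    One common way to combine both eyes' gaze directions into a depth estimate is to intersect the two gaze rays; the sketch below returns the midpoint of the shortest segment between them. This is an illustrative estimator, not necessarily the authors' method.

```python
import numpy as np

def gaze_point(p_l, d_l, p_r, d_r):
    """Estimate a 3-D gaze point from the two eyes' gaze rays.

    p_l, p_r : eye positions (ray origins), shape (3,)
    d_l, d_r : unit gaze direction vectors, shape (3,)
    Returns the midpoint of the shortest segment between the two rays.
    """
    w0 = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                        # (nearly) parallel gaze rays
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    return 0.5 * ((p_l + s * d_l) + (p_r + t * d_r))
```

    The distance from the midpoint between the eyes to the returned point then gives the gaze depth.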

  15. 3D bioprinting of tissues and organs.

    PubMed

    Murphy, Sean V; Atala, Anthony

    2014-08-01

    Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology. PMID:25093879

  16. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    Optically rewritable liquid crystal display (ORWLCD) is a concept based on an optically addressed bi-stable display that does not need any power to hold the image after it has been uploaded. Recently, the demand for 3D image display has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve high complexity for image processing on both hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD by dividing the given image into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the emerging light from different domains of the image in different manners. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a proper ORWLCD printer and image processing, and therefore, with easy image refreshing and good image quality, such displays can be applied in many applications, viz. 3D bi-stable displays, security elements, etc. PMID:25361316

  17. Extra Dimensions: 3D in PDF Documentation

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2012-12-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company. Incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open source libraries which allow the programmatic creation of 3D content in PDF format. While not providing the same level of access to CAD files as the commercial software, it does provide physicists with an alternative path to incorporate 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

  18. 3D Human cartilage surface characterization by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8  ×  8, 4  ×  4 and 1  ×  1 mm (width  ×  length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman’s rho and assessment of inter-group differences using the Kruskal Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D

  19. 3D Human cartilage surface characterization by optical coherence tomography.

    PubMed

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Truhn, Daniel; Pufe, Thomas; Jahr, Holger; Nebelung, Sven

    2015-10-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8  ×  8, 4  ×  4 and 1  ×  1 mm (width  ×  length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman's rho and assessment of inter-group differences using the Kruskal Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D surface

  20. FUN3D Manual: 12.7

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  1. FUN3D Manual: 12.9

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  2. FUN3D Manual: 13.0

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  3. FUN3D Manual: 12.8

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  4. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high-density, high-performance microelectronics, pursued through dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  5. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme to hide secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13 layers. To the best of our knowledge, this novel approach can provide much higher hiding capacity than other state-of-the-art approaches, while obeying the low distortion and security basic requirements for steganography on 3D models. PMID:19147891

  6. New method of 3-D object recognition

    NASA Astrophysics Data System (ADS)

    He, An-Zhi; Li, Qun Z.; Miao, Peng C.

    1991-12-01

    In this paper, a new method of 3-D object recognition using optical techniques and a computer is presented. We perform 3-D object recognition using moire contour to obtain the object's 3-D coordinates, projecting drawings of the object in three coordinate planes to describe it and using a method of inquiring library of judgement to match objects. The recognition of a simple geometrical entity is simulated by computer and studied experimentally. The recognition of an object which is composed of a few simple geometrical entities is discussed.

  7. Explicit 3-D Hydrodynamic FEM Program

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.

  8. How We 3D-Print Aerogel

    SciTech Connect

    2015-04-23

    A new type of graphene aerogel will make for better energy storage, sensors, nanoelectronics, catalysis and separations. Lawrence Livermore National Laboratory researchers have made graphene aerogel microlattices with an engineered architecture via a 3D printing technique known as direct ink writing. The research appears in the April 22 edition of the journal, Nature Communications. The 3D printed graphene aerogels have high surface area, excellent electrical conductivity, are lightweight, have mechanical stiffness and exhibit supercompressibility (up to 90 percent compressive strain). In addition, the 3D printed graphene aerogel microlattices show an order of magnitude improvement over bulk graphene materials and much better mass transport.

  9. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  10. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  11. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  12. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  13. Explicit 3-D Hydrodynamic FEM Program

    SciTech Connect

    2000-11-07

    DYNA3D is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation.

  14. 3D model of the Bernese Part of the Swiss Molasse Basin: visualization of uncertainties in a 3D model

    NASA Astrophysics Data System (ADS)

    Mock, Samuel; Allenbach, Robin; Reynolds, Lance; Wehrens, Philip; Kurmann-Matzenauer, Eva; Kuhn, Pascal; Michael, Salomè; Di Tommaso, Gennaro; Herwegh, Marco

    2016-04-01

    The Swiss Molasse Basin comprises the western and central part of the North Alpine Foreland Basin. In recent years it has come under closer scrutiny due to its promising geopotentials such as geothermal energy and CO2 sequestration. In order to address these topics, good knowledge of the subsurface is a key prerequisite. For that matter, geological 3D models serve as valuable tools. In collaboration with the Swiss Geological Survey (swisstopo) and as part of the project GeoMol CH, a geological 3D model of the Swiss Molasse Basin in the Canton of Bern has been built. The model covers an area of 1810 km² and reaches depths of up to 6.7 km. It comprises 10 major Cenozoic and Mesozoic units and numerous faults. The 3D model is mainly based on 2D seismic data complemented by information from a few deep wells. Additionally, data from geological maps and profiles were used for refinement at shallow depths. In total, 1163 km of reflection seismic data, along 77 seismic lines, have been interpreted by different authors with respect to stratigraphy and structures. Both horizons and faults have been interpreted in 2D and modelled in 3D using IHS's Kingdom Suite and Midland Valley's MOVE software packages, respectively. Given the variable degree of subsurface information available, each 3D model is subject to uncertainty. With the primary input data coming from interpretation of reflection seismic data, a variety of uncertainties comes into play. Some of them are difficult to address (e.g. author's style of interpretation) while others can be quantified (e.g. mis-tie correction, well-tie). An important source of uncertainties is the quality of seismic data; this affects the traceability and lateral continuation of seismic reflectors. By defining quality classes we can semi-quantify this source of uncertainty. In order to visualize the quality and density of the input data in a meaningful way, we introduce quality-weighted data density maps. In combination with the geological 3D

  15. XML3D and Xflow: combining declarative 3D for the Web with generic data flows.

    PubMed

    Klein, Felix; Sons, Kristian; Rubinstein, Dmitri; Slusallek, Philipp

    2013-01-01

    Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing. PMID:24808080

  16. JAR3D Webserver: Scoring and aligning RNA loop sequences to known 3D motifs.

    PubMed

    Roll, James; Zirbel, Craig L; Sweeney, Blake; Petrov, Anton I; Leontis, Neocles

    2016-07-01

    Many non-coding RNAs have been identified and may function by forming 2D and 3D structures. RNA hairpin and internal loops are often represented as unstructured on secondary structure diagrams, but RNA 3D structures show that most such loops are structured by non-Watson-Crick basepairs and base stacking. Moreover, different RNA sequences can form the same RNA 3D motif. JAR3D finds possible 3D geometries for hairpin and internal loops by matching loop sequences to motif groups from the RNA 3D Motif Atlas, by exact sequence match when possible, and by probabilistic scoring and edit distance for novel sequences. The scoring gauges the ability of the sequences to form the same pattern of interactions observed in 3D structures of the motif. The JAR3D webserver at http://rna.bgsu.edu/jar3d/ takes one or many sequences of a single loop as input, or else one or many sequences of longer RNAs with multiple loops. Each sequence is scored against all current motif groups. The output shows the ten best-matching motif groups. Users can align input sequences to each of the motif groups found by JAR3D. JAR3D will be updated with every release of the RNA 3D Motif Atlas, and so its performance is expected to improve over time. PMID:27235417

  17. Colored 3D surface reconstruction using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Guo, Lian-peng; Chen, Xiang-ning; Chen, Ying; Liu, Bin

    2015-03-01

    A colored 3D surface reconstruction method which effectively fuses the information of both depth and color images using a Microsoft Kinect is proposed and demonstrated by experiment. Kinect depth images are processed with an improved joint-bilateral filter based on region segmentation, which efficiently combines the depth and color data to improve their quality. The registered depth data are integrated to achieve a surface reconstruction through the colored truncated signed distance fields presented in this paper. Finally, improved ray casting for rendering the full colored surface is implemented to estimate the color texture of the reconstructed object. For the depth and color images of a toy car, the improved joint-bilateral filter based on region segmentation improves the quality of the depth images with a peak signal-to-noise ratio (PSNR) improvement of approximately 4.57 dB, which is better than the 1.16 dB achieved by the plain joint-bilateral filter. The colored reconstruction results for the toy car demonstrate the suitability and ability of the proposed method.
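
    The PSNR comparison reported above can in principle be reproduced with a few lines of code; the sketch below (assuming 16-bit depth maps of equal size, not the authors' exact evaluation code) computes the PSNR in dB between a reference and a filtered depth image.

```python
import numpy as np

def psnr(reference, test, max_value=65535):
    """Peak signal-to-noise ratio in dB between two depth maps of equal shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```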

  18. Ultra-High Resolution 3D Imaging of Whole Cells.

    PubMed

    Huang, Fang; Sirinakis, George; Allgeyer, Edward S; Schroeder, Lena K; Duim, Whitney C; Kromann, Emil B; Phan, Thomy; Rivera-Molina, Felix E; Myers, Jordan R; Irnov, Irnov; Lessard, Mark; Zhang, Yongdeng; Handel, Mary Ann; Jacobs-Wagner, Christine; Lusk, C Patrick; Rothman, James E; Toomre, Derek; Booth, Martin J; Bewersdorf, Joerg

    2016-08-11

    Fluorescence nanoscopy, or super-resolution microscopy, has become an important tool in cell biological research. However, because of its usually inferior resolution in the depth direction (50-80 nm) and rapidly deteriorating resolution in thick samples, its practical biological application has been effectively limited to two dimensions and thin samples. Here, we present the development of whole-cell 4Pi single-molecule switching nanoscopy (W-4PiSMSN), an optical nanoscope that allows imaging of three-dimensional (3D) structures at 10- to 20-nm resolution throughout entire mammalian cells. We demonstrate the wide applicability of W-4PiSMSN across diverse research fields by imaging complex molecular architectures ranging from bacteriophages to nuclear pores, cilia, and synaptonemal complexes in large 3D cellular volumes. PMID:27397506

  19. 3D resolved mapping of optical aberrations in thick tissues

    PubMed Central

    Zeng, Jun; Mahou, Pierre; Schanne-Klein, Marie-Claire; Beaurepaire, Emmanuel; Débarre, Delphine

    2012-01-01

    We demonstrate a simple method for mapping optical aberrations with 3D resolution within thick samples. The method relies on the local measurement of the variation in image quality with externally applied aberrations. We discuss the accuracy of the method as a function of the signal strength and of the aberration amplitude and we derive the achievable resolution for the resulting measurements. We then report on measured 3D aberration maps in human skin biopsies and mouse brain slices. From these data, we analyse the consequences of tissue structure and refractive index distribution on aberrations and imaging depth in normal and cleared tissue samples. The aberration maps allow the estimation of the typical aplanetism region size over which aberrations can be uniformly corrected. This method and data pave the way towards efficient correction strategies for tissue imaging applications. PMID:22876353

  20. Large Area Printing of 3D Photonic Crystals

    NASA Astrophysics Data System (ADS)

    Watkins, James J.; Beaulieu, Michael R.; Hendricks, Nicholas R.; Kothari, Rohit

    2014-03-01

    We have developed a readily scalable print, lift, and stack approach for producing large area, 3D photonic crystal (PC) structures. UV-assisted nanoimprint lithography (UV-NIL) was used to pattern grating structures comprised of highly filled nanoparticle polymer composite resists with tunable refractive indices (RI). The gratings were robust and upon release from a support substrate were oriented and stacked to yield 3D PCs. The RI of the composite resists was tuned between 1.58 and 1.92 at 800 nm while maintaining excellent optical transparency. The grating structure dimensions (line width, depth, and pitch) were easily varied by simply changing the imprint mold. For example, a 6-layer log-pile stack was prepared using a composite resist with an RI of 1.72, yielding 72% reflection at 900 nm. The process is scalable for roll-to-roll (R2R) production. Center for Hierarchical Manufacturing - an NSF Nanoscale Science and Engineering Center.

  1. 3D-printed bioanalytical devices

    NASA Astrophysics Data System (ADS)

    Bishop, Gregory W.; Satterwhite-Warden, Jennifer E.; Kadimisetty, Karteek; Rusling, James F.

    2016-07-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices.

  2. Tropical Cyclone Jack in Satellite 3-D

    NASA Video Gallery

    This 3-D flyby from NASA's TRMM satellite of Tropical Cyclone Jack on April 21 shows that some of the thunderstorms seen by TRMM's Precipitation Radar (PR) were still reaching heights of at least 17 km (10.5 miles). ...

  3. 3D Printing for Tissue Engineering

    PubMed Central

    Jia, Jia; Yao, Hai; Mei, Ying

    2016-01-01

    Tissue engineering aims to fabricate functional tissue for applications in regenerative medicine and drug testing. More recently, 3D printing has shown great promise in tissue fabrication with a structural control from micro- to macro-scale by using a layer-by-layer approach. Whether through scaffold-based or scaffold-free approaches, the standard for 3D printed tissue engineering constructs is to provide a biomimetic structural environment that facilitates tissue formation and promotes host tissue integration (e.g., cellular infiltration, vascularization, and active remodeling). This review will cover several approaches that have advanced the field of 3D printing through novel fabrication methods of tissue engineering constructs. It will also discuss the applications of synthetic and natural materials for 3D printing facilitated tissue fabrication. PMID:26869728

  4. 3D Visualization of Recent Sumatra Earthquake

    NASA Astrophysics Data System (ADS)

    Nayak, Atul; Kilb, Debi

    2005-04-01

    Scientists and visualization experts at the Scripps Institution of Oceanography have created an interactive three-dimensional visualization of the 28 March 2005 magnitude 8.7 earthquake in Sumatra. The visualization shows the earthquake's hypocenter and aftershocks recorded until 29 March 2005, and compares it with the location of the 26 December 2004 magnitude 9 event and the consequent seismicity in that region. The 3D visualization was created using the Fledermaus software developed by Interactive Visualization Systems (http://www.ivs.unb.ca/) and stored as a ``scene'' file. To view this visualization, viewers need to download and install the free viewer program iView3D (http://www.ivs3d.com/products/iview3d).

  5. Future Engineers 3-D Print Timelapse

    NASA Video Gallery

    NASA Challenges K-12 students to create a model of a container for space using 3-D modeling software. Astronauts need containers of all kinds - from advanced containers that can study fruit flies t...

  6. 3-D Flyover Visualization of Veil Nebula

    NASA Video Gallery

    This 3-D visualization flies across a small portion of the Veil Nebula as photographed by the Hubble Space Telescope. This region is a small part of a huge expanding remnant from a star that explod...

  7. Quantifying Modes of 3D Cell Migration.

    PubMed

    Driscoll, Meghan K; Danuser, Gaudenz

    2015-12-01

    Although it is widely appreciated that cells migrate in a variety of diverse environments in vivo, we are only now beginning to use experimental workflows that yield images with sufficient spatiotemporal resolution to study the molecular processes governing cell migration in 3D environments. Since cell migration is a dynamic process, it is usually studied via microscopy, but 3D movies of 3D processes are difficult to interpret by visual inspection. In this review, we discuss the technologies required to study the diversity of 3D cell migration modes with a focus on the visualization and computational analysis tools needed to study cell migration quantitatively at a level comparable to the analyses performed today on cells crawling on flat substrates. PMID:26603943

  8. 3D-patterned polymer brush surfaces

    NASA Astrophysics Data System (ADS)

    Zhou, Xuechang; Liu, Xuqing; Xie, Zhuang; Zheng, Zijian

    2011-12-01

    Polymer brush-based three-dimensional (3D) structures are emerging as a powerful platform to engineer a surface by providing abundant spatially distributed chemical and physical properties. In this feature article, we aim to give a summary of the recent progress on the fabrication of 3D structures with polymer brushes, with a particular focus on the micro- and nanoscale. We start with a brief introduction on polymer brushes and the challenges to prepare their 3D structures. Then, we highlight the recent advances of the fabrication approaches on the basis of traditional polymerization time and grafting density strategies, and a recently developed feature density strategy. Finally, we provide some perspective outlooks on the future directions of engineering the 3D structures with polymer brushes.

  9. Modeling Cellular Processes in 3-D

    PubMed Central

    Mogilner, Alex; Odde, David

    2011-01-01

    Summary: Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated, we must address the issue of modeling cellular processes in 3-D. Here, we highlight recent advances related to 3-D modeling in cell biology. While some processes require full 3-D analysis, we suggest that others are more naturally described in 2-D or 1-D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3-D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling. PMID:22036197

  10. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, Anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a real-time, interactive 3D means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth Science Data sets that have been collected on a daily basis. This application uses a third-party, real-time, interactive 3D game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  11. 3-D Animation of Typhoon Bopha

    NASA Video Gallery

    This 3-D animation of NASA's TRMM satellite data showed Typhoon Bopha tracking over the Philippines on Dec. 3 and moving into the Sulu Sea on Dec. 4, 2012. TRMM saw heavy rain (red) was falling at ...

  12. 3-D TRMM Flyby of Hurricane Amanda

    NASA Video Gallery

    The TRMM satellite flew over Hurricane Amanda on Tuesday, May 27 at 1049 UTC (6:49 a.m. EDT) and captured rainfall rates and cloud height data that was used to create this 3-D simulated flyby. Cred...

  13. Cyclone Rusty's Landfall in 3-D

    NASA Video Gallery

    This 3-D image derived from NASA's TRMM satellite Precipitation Radar data on February 26, 2013 at 0654 UTC showed that the tops of some towering thunderstorms in Rusty's eye wall were reaching hei...

  14. TRMM 3-D Flyby of Ingrid

    NASA Video Gallery

    This 3-D flyby of Tropical Storm Ingrid's rainfall was created from TRMM satellite data for Sept. 16. Heaviest rainfall appears in red towers over the Gulf of Mexico, while moderate rainfall stretc...

  15. 3D-printed bioanalytical devices.

    PubMed

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-07-15

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices. PMID:27250897

  16. Palacios field: A 3-D case history

    SciTech Connect

    McWhorter, R.; Torguson, B.

    1994-12-31

    In late 1992, Mitchell Energy Corporation acquired a 7.75 sq mi (20.0 km²) 3-D seismic survey over Palacios field, Matagorda County, Texas. The company shot the survey to help evaluate the field for further development by delineating the fault pattern of the producing Middle Oligocene Frio interval. They compare the mapping of the field before and after the 3-D survey. This comparison shows that the 3-D volume yields superior fault imaging and interpretability compared to the dense 2-D data set. The problems with the 2-D data set are improper imaging of small and oblique faults and insufficient coverage over a complex fault pattern. Whereas the 2-D data set validated a simple fault model, the 3-D volume revealed a more complex history of faulting that includes three different fault systems. This discovery enabled them to reconstruct the depositional and structural history of Palacios field.

  17. Radiosity diffusion model in 3D

    NASA Astrophysics Data System (ADS)

    Riley, Jason D.; Arridge, Simon R.; Chrysanthou, Yiorgos; Dehghani, Hamid; Hillman, Elizabeth M. C.; Schweiger, Martin

    2001-11-01

    We present the Radiosity-Diffusion model in three dimensions (3D), as an extension to previous work in 2D. It is a method for handling non-scattering spaces in optically participating media. We present the extension of the model to 3D including an extension to the model to cope with increased complexity of the 3D domain. We show that in 3D more careful consideration must be given to the issues of meshing and visibility to model the transport of light within reasonable computational bounds. We demonstrate the model to be comparable to Monte-Carlo simulations for selected geometries, and show preliminary results of comparisons to measured time-resolved data acquired on resin phantoms.

  18. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large-scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) regards non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large-scale shape is preserved. Fine surface details which were previously not contained in the surface scans are incorporated using the image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a

  19. 3D-HST results and prospects

    NASA Astrophysics Data System (ADS)

    Van Dokkum, Pieter G.

    2015-01-01

    The 3D-HST survey is providing a comprehensive census of the distant Universe, combining HST WFC3 imaging and grism spectroscopy with a myriad of other ground- and space-based datasets. This talk constitutes an overview of science results from the survey, with a focus on ongoing work and ways to exploit the rich public release of the 3D-HST data.

  20. Assessing 3d Photogrammetry Techniques in Craniometrics

    NASA Astrophysics Data System (ADS)

    Moshobane, M. C.; de Bruyn, P. J. N.; Bester, M. N.

    2016-06-01

    Morphometrics (the measurement of morphological features) has been revolutionized by the creation of new techniques to study how organismal shape co-varies with several factors such as ecophenotypy. Ecophenotypy refers to the divergence of phenotypes due to developmental changes induced by local environmental conditions, producing distinct ecophenotypes. None of the techniques hitherto utilized could explicitly address organismal shape in a complete biological form, i.e. three-dimensionally. This study investigates the use of the commercial three-dimensional (3D) modelling software Photomodeler Scanner® (PMSc®) to produce accurate and high-resolution 3D models, here applied to the modelling of Subantarctic fur seal (Arctocephalus tropicalis) and Antarctic fur seal (Arctocephalus gazella) skulls to allow for 3D measurements. Using this method, sixteen accurate 3D skull models were produced and five metrics were determined. The 3D linear measurements were compared to measurements taken manually with a digital caliper. In addition, repetitive measurements were recorded by varying researchers to determine repeatability. To allow for comparison, straight-line measurements were taken with the software, assuming that close accord with all manually measured features would illustrate the model's accurate replication of reality. Measurements were not significantly different, demonstrating that realistic 3D skull models can be successfully produced to provide a consistent basis for craniometrics, with the additional benefit of allowing non-linear measurements if required.
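
    The linear craniometric comparison described above amounts to Euclidean distances between pairs of 3D landmarks digitised on the model, set against caliper values. Below is a minimal sketch with hypothetical landmark coordinates and a hypothetical caliper value; the landmark names and numbers are illustrative only.

```python
import numpy as np

# Hypothetical landmark coordinates (in mm) digitised on a 3D skull model;
# each craniometric metric is the straight-line distance between two landmarks.
landmarks = {
    "prosthion": np.array([10.2, 4.1, 63.0]),
    "opisthocranion": np.array([12.8, 6.5, -151.4]),
}

def linear_measurement(p, q):
    """Euclidean (straight-line) distance between two 3D landmarks."""
    return float(np.linalg.norm(p - q))

model_value = linear_measurement(landmarks["prosthion"], landmarks["opisthocranion"])
caliper_value = 214.6                      # hypothetical manual measurement (mm)
print(abs(model_value - caliper_value))    # absolute deviation between the two methods
```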

  1. 3D model reconstruction of underground goaf

    NASA Astrophysics Data System (ADS)

    Fang, Yuanmin; Zuo, Xiaoqing; Jin, Baoxuan

    2005-10-01

    Constructing a 3D model of an underground goaf allows the mining process to be controlled better and mining work to be arranged reasonably. However, the shapes of goafs and the laneways among them are very irregular, which creates great difficulties in data acquisition and 3D model reconstruction. In this paper, we study methods of data acquisition and 3D model construction for underground goafs and build topological relations among goafs. The main contents are as follows: a) an efficient encoding rule is proposed to structure the field measurement data; b) a 3D model construction method for goafs is put forward, which combines several TIN (triangulated irregular network) pieces, and an efficient automatic processing algorithm for TIN boundaries is proposed; c) topological relations among goaf models are established. The TIN object is the basic modelling element of the goaf 3D model, and the topological relations among goafs are created and maintained by building the topological relations among TIN objects. Based on this, various 3D spatial analysis functions can be performed, including transects and volume calculation of goafs. A prototype is developed which realizes the models and algorithms proposed in this paper.
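
    A TIN-based surface and its volume can be sketched with a standard Delaunay triangulation; the snippet below (assuming scattered (x, y, z) survey points and scipy, and simplifying the paper's multi-TIN construction to a single surface) estimates the volume between the TIN and a horizontal base level by summing triangular prisms.

```python
import numpy as np
from scipy.spatial import Delaunay

def tin_volume(points, base_level=0.0):
    """Build a TIN from scattered (x, y, z) survey points and estimate the
    volume enclosed between the TIN surface and a horizontal base level."""
    xy, z = points[:, :2], points[:, 2]
    tin = Delaunay(xy)                      # 2D triangulation of plan positions
    volume = 0.0
    for tri in tin.simplices:
        (ax, ay), (bx, by), (cx, cy) = xy[tri]
        area = 0.5 * abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
        volume += area * (z[tri].mean() - base_level)   # prism with mean height
    return volume

# Example with random survey points between 0 and 5 m above the base level.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 100, 500), rng.uniform(0, 100, 500),
                       rng.uniform(0, 5, 500)])
print(tin_volume(pts))
```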

  2. 3D steerable wavelets in practice.

    PubMed

    Chenouard, Nicolas; Unser, Michael

    2012-11-01

    We introduce a systematic and practical design for steerable wavelet frames in 3D. Our steerable wavelets are obtained by applying a 3D version of the generalized Riesz transform to a primary isotropic wavelet frame. The novel transform is self-reversible (tight frame) and its elementary constituents (Riesz wavelets) can be efficiently rotated in any 3D direction by forming appropriate linear combinations. Moreover, the basis functions at a given location can be linearly combined to design custom (and adaptive) steerable wavelets. The features of the proposed method are illustrated with the processing and analysis of 3D biomedical data. In particular, we show how those wavelets can be used to characterize directional patterns and to detect edges by means of a 3D monogenic analysis. We also propose a new inverse-problem formalism along with an optimization algorithm for reconstructing 3D images from a sparse set of wavelet-domain edges. The scheme results in high-quality image reconstructions which demonstrate the feature-reduction ability of the steerable wavelets as well as their potential for solving inverse problems. PMID:22752138
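
    The elementary operator behind these Riesz wavelets, the 3D Riesz transform, is easy to state in the Fourier domain: each component multiplies the spectrum by -i k_j/|k|. The sketch below implements just that operator with FFTs; it is an illustration of the transform under a periodic-volume assumption, not the authors' full steerable frame construction.

```python
import numpy as np

def riesz_transform_3d(volume):
    """Return the three Riesz components of a real 3D volume.
    In the Fourier domain each component is (-1j * k_j / |k|) * F(volume)."""
    freqs = [np.fft.fftfreq(n) for n in volume.shape]
    kz, ky, kx = np.meshgrid(*freqs, indexing="ij")
    k_norm = np.sqrt(kx**2 + ky**2 + kz**2)
    k_norm[0, 0, 0] = 1.0                      # avoid division by zero at DC
    spectrum = np.fft.fftn(volume)
    components = []
    for k in (kz, ky, kx):
        comp = np.fft.ifftn(-1j * k / k_norm * spectrum)
        components.append(np.real(comp))       # input is real, so keep the real part
    return components
```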

  3. DYNA3D example problem manual

    SciTech Connect

    Lovejoy, S.C.; Whirley, R.G.

    1990-10-10

    This manual describes in detail the solution of ten example problems using the explicit nonlinear finite element code DYNA3D. The sample problems include solid, shell, and beam element types, and a variety of linear and nonlinear material models. For each example, there is first an engineering description of the physical problem to be studied. Next, the analytical techniques incorporated in the model are discussed and key features of DYNA3D are highlighted. INGRID commands used to generate the mesh are listed, and sample plots from the DYNA3D analysis are given. Finally, there is a description of the TAURUS post-processing commands used to generate the plots of the solution. This set of example problems is useful in verifying the installation of DYNA3D on a new computer system. In addition, these documented analyses illustrate the application of DYNA3D to a variety of engineering problems, and thus this manual should be helpful to new analysts getting started with DYNA3D. 7 refs., 56 figs., 9 tabs.

  4. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care. PMID:25620087

  5. RAG-3D: a search tool for RNA 3D substructures.

    PubMed

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-10-30

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D-a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool-designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  6. 3-D SAR image formation from sparse aperture data using 3-D target grids

    NASA Astrophysics Data System (ADS)

    Bhalla, Rajan; Li, Junfei; Ling, Hao

    2005-05-01

    The performance of ATR systems can potentially be improved by using three-dimensional (3-D) SAR images instead of the traditional two-dimensional SAR images or one-dimensional range profiles. 3-D SAR image formation of targets from radar backscattered data collected on wide angle, sparse apertures has been identified by AFRL as fundamental to building an object detection and recognition capability. A set of data has been released as a challenge problem. This paper describes a technique based on the concept of 3-D target grids aimed at the formation of 3-D SAR images of targets from sparse aperture data. The 3-D target grids capture the 3-D spatial and angular scattering properties of the target and serve as matched filters for SAR formation. The results of 3-D SAR formation using the backhoe public release data are presented.

  7. CFL3D, FUN3d, and NSU3D Contributions to the Fifth Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Laflin, Kelly R.; Chaffin, Mark S.; Powell, Nicholas; Levy, David W.

    2013-01-01

    Results presented at the Fifth Drag Prediction Workshop using CFL3D, FUN3D, and NSU3D are described. These are calculations on the workshop provided grids and drag adapted grids. The NSU3D results have been updated to reflect an improvement to skin friction calculation on skewed grids. FUN3D results generated after the workshop are included for custom participant generated grids and a grid from a previous workshop. Uniform grid refinement at the design condition shows a tight grouping in calculated drag, where the variation in the pressure component of drag is larger than the skin friction component. At this design condition, a fine-grid drag value was predicted with a smaller drag adjoint adapted grid via tetrahedral adaption to a metric and mixed-element subdivision. The buffet study produced larger variation than the design case, which is attributed to large differences in the predicted side-of-body separation extent. Various modeling and discretization approaches had a strong impact on predicted side-of-body separation. This large wing root separation bubble was not observed in wind tunnel tests, indicating that more work is necessary in modeling wing root juncture flows to predict experiments.

  8. Large viewing angle projection type electro-holography using new type mist 3D screen

    NASA Astrophysics Data System (ADS)

    Sato, Koki; Zhao, Hongming; Takano, Kunihiko

    2008-02-01

    Recently, many types of 3-D displays are being developed. We want to view 3-D moving images comfortably and with more expanded depth; holography differs from other 3-D displays because a natural stereoscopic image can be obtained. We previously developed an electro-holographic display using a virtual image, but the viewing area was small because the pixel size of the LCD is not small enough. This time we developed a projection-type electro-holographic display system. In the case of projection-type holography [1], a 3-D screen is needed in order to project the reconstructed image clearly and to widen the viewing angle. We developed an electro-holographic display system using a mist 3-D screen. However, the image reconstructed with the mist 3-D screen flickered due to gravity and the flow of air. We then considered how to reduce the flicker of the image and found that it could be reduced using a flow-controlled nozzle. Hence, we first considered the most suitable shape of the 3-D screen and then constructed an array of flow-controlled mist 3-D screens. From the experimental results we obtained a considerably high-contrast 3-D moving image with a viewing area of more than 30° using this new type of mist 3-D screen with flow-controlled nozzles, and confirmed the efficiency of this method.

  9. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  10. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process of facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - Kinect 360 - and an implementation of Active Appearance Models (AAM). We also proposed a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system to diagnose facial paralysis.

  11. Implementation Of True 3D Cursors In Computer Graphics

    NASA Astrophysics Data System (ADS)

    Butts, David R.; McAllister, David F.

    1988-06-01

    The advances in stereoscopic image display techniques have shown an increased need for real-time interaction with the three-dimensional image. We have developed a prototype real-time stereoscopic cursor to investigate this interaction. The results have pointed out areas where hardware speeds are a limiting factor, as well as areas where various methodologies cause perceptual difficulties. This paper addresses the psychological and perceptual anomalies involved in stereo image techniques, cursor generation and motion, and the use of the device as a 3D drawing and depth measuring tool.

  12. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
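
    The MCML-style core of such a model samples photon free path lengths from the Beer-Lambert law and attenuates the packet weight by the single-scattering albedo at each interaction. The sketch below strips this down to an infinite homogeneous medium with isotropic scattering and hypothetical optical coefficients; the layered tissue, vessel geometry and boundary handling of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_a, mu_s = 0.1, 10.0          # hypothetical absorption / scattering coefficients (1/mm)
mu_t = mu_a + mu_s              # total interaction coefficient

def propagate_photon(max_steps=1000, w_min=1e-4):
    """Trace one photon packet in an infinite homogeneous medium."""
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])
    weight = 1.0
    for _ in range(max_steps):
        step = -np.log(rng.random()) / mu_t          # sampled free path length
        pos = pos + step * direction
        weight *= mu_s / mu_t                        # survival (albedo) weighting
        if weight < w_min:
            break
        # isotropic scattering for simplicity (real codes use Henyey-Greenstein)
        cos_t = 2 * rng.random() - 1
        phi = 2 * np.pi * rng.random()
        sin_t = np.sqrt(1 - cos_t**2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return pos, weight
```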

  13. PLOT3D Export Tool for Tecplot

    NASA Technical Reports Server (NTRS)

    Alter, Stephen

    2010-01-01

    The PLOT3D export tool for Tecplot solves the problem of modified data being impossible to output for use by another computational science solver. The PLOT3D Exporter add-on enables the use of the most commonly available visualization tools to engineers for output of a standard format. The exportation of PLOT3D data from Tecplot has far reaching effects because it allows for grid and solution manipulation within a graphical user interface (GUI) that is easily customized with macro language-based and user-developed GUIs. The add-on also enables the use of Tecplot as an interpolation tool for solution conversion between different grids of different types. This one add-on enhances the functionality of Tecplot so significantly, it offers the ability to incorporate Tecplot into a general suite of tools for computational science applications as a 3D graphics engine for visualization of all data. Within the PLOT3D Export Add-on are several functions that enhance the operations and effectiveness of the add-on. Unlike Tecplot output functions, the PLOT3D Export Add-on enables the use of the zone selection dialog in Tecplot to choose which zones are to be written by offering three distinct options - output of active, inactive, or all zones (grid blocks). As the user modifies the zones to output with the zone selection dialog, the zones to be written are similarly updated. This enables the use of Tecplot to create multiple configurations of a geometry being analyzed. For example, if an aircraft is loaded with multiple deflections of flaps, by activating and deactivating different zones for a specific flap setting, new specific configurations of that aircraft can be easily generated by only writing out specific zones. Thus, if ten flap settings are loaded into Tecplot, the PLOT3D Export software can output ten different configurations, one for each flap setting.
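
    For context, a PLOT3D grid file of the kind the add-on writes is simple enough to sketch by hand: a single-block ASCII file stores the block dimensions followed by all X, then Y, then Z coordinates in Fortran order. The snippet below is a minimal illustrative writer under those assumptions (production PLOT3D files are frequently binary, multi-block, and may carry IBLANK data); it is not the add-on's implementation.

```python
import numpy as np

def write_plot3d_grid(path, X, Y, Z):
    """Write a minimal single-block ASCII PLOT3D grid file: dimensions on the
    first line, then all X, Y, Z values in Fortran (column-major) order."""
    ni, nj, nk = X.shape
    with open(path, "w") as fh:
        fh.write(f"{ni} {nj} {nk}\n")
        for arr in (X, Y, Z):
            flat = arr.ravel(order="F")
            for i in range(0, flat.size, 6):             # six values per line
                fh.write(" ".join(f"{v:16.8e}" for v in flat[i:i + 6]) + "\n")

# Example: a small Cartesian block.
x, y, z = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 4),
                      np.linspace(0, 1, 3), indexing="ij")
write_plot3d_grid("block.x", x, y, z)
```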

  14. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, a subjective test may bring a more correct evaluation. However, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would be beneficial to the development of crosstalk minimization and cancellation algorithms which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques to assign different values of crosstalk between image pairs. It can be seen from the literature that the structures of scenes have a significant impact on the perceived crosstalk, so we first extract the differences of the structural information between original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which can directly evaluate the structural changes between two complex-structured signals. Then the structural changes of the left view and right view are computed respectively and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention
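
    A rough sketch of the kind of metric outlined above, assuming skimage is available, 8-bit grayscale views, and a depth map normalised to [0, 1]: an SSIM map between the original and crosstalk-distorted view provides the structural distortion, which is then weighted more heavily where objects pop out toward the viewer. This is an illustrative simplification, not the metric proposed in the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def crosstalk_distortion(original, distorted, depth_map, pop_out_weight=2.0):
    """Structural-change score between an original view and its crosstalk-distorted
    version, weighted so that near (pop-out) regions count more.
    depth_map is assumed normalised to [0, 1] with 1 = closest to the viewer."""
    _, ssim_map = structural_similarity(original, distorted,
                                        data_range=255, full=True)
    distortion = 1.0 - ssim_map                 # 0 = unchanged, 1 = fully altered
    weights = 1.0 + (pop_out_weight - 1.0) * depth_map
    return float(np.average(distortion, weights=weights))
```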

  15. A microfluidic device for 2D to 3D and 3D to 3D cell navigation

    NASA Astrophysics Data System (ADS)

    Shamloo, Amir; Amirifar, Leyla

    2016-01-01

    Microfluidic devices have received wide attention and shown great potential in the field of tissue engineering and regenerative medicine. Investigating cell responses to various stimuli is much more accurate and comprehensive with the aid of microfluidic devices. In this study, we introduce a microfluidic device in which the matrix density, as a mechanical property, and the concentration profile of a biochemical factor, as a chemical property, can be altered. Our microfluidic device has a cell tank and a cell culture chamber to mimic both 2D to 3D and 3D to 3D migration of three types of cells. Fluid shear stress on the cells is negligible and a stable concentration gradient can be obtained by diffusion. The device was designed using numerical simulation so that uniform concentration gradients were obtained throughout the cell culture chamber. Adult neural cells were cultured within this device and showed different branching and axonal navigation phenotypes within varying nerve growth factor (NGF) concentration profiles. Neural stem cells were also cultured within varying collagen matrix densities while exposed to NGF concentrations, and they underwent 3D to 3D collective migration. By generating vascular endothelial growth factor concentration gradients, adult human dermal microvascular endothelial cells also migrated in a 2D to 3D manner and formed a stable lumen within a specific collagen matrix density. It was observed that a minimum absolute concentration and concentration gradient were required to stimulate migration of all of the cell types. This device has the advantage of changing multiple parameters simultaneously and is expected to have wide applicability in cell studies.

  16. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera that considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we also evaluate the accuracy of virtual image points projected back into 3D space.

  17. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGESBeta

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  18. RAG-3D: A search tool for RNA 3D substructures

    SciTech Connect

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  19. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  20. Automatic needle segmentation in 3D ultrasound images using 3D Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgeng

    2007-12-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential for image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is currently used to destroy tumor cells or stop bleeding. A 3D US guidance system has been developed to avoid accidents or patient death caused by inaccurate localization of the electrode and the tumor position during treatment. In this paper, we describe two automated techniques, the 3D Hough Transform (3DHT) and the 3D Randomized Hough Transform (3DRHT), which are potentially fast, accurate, and robust, to provide needle segmentation in 3D US images for 3D US imaging guidance. Based on the representation (Φ, θ, ρ, α) of straight lines in 3D space, we used the 3DHT algorithm to segment needles successfully, assuming that the approximate needle position and orientation are known a priori. The 3DRHT algorithm was developed to detect needles quickly without any prior information from the 3D US images. The needle segmentation techniques were evaluated using 3D US images acquired by scanning water phantoms. The experiments demonstrated the feasibility of the two 3D needle segmentation algorithms described in this paper.
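
    The flavour of the voting step can be conveyed with a deliberately simplified 3D Hough line search. It does not use the (Φ, θ, ρ, α) parameterization or either of the 3DHT/3DRHT algorithms from the paper; the direction grid, bin size, and synthetic data below are assumptions chosen only to keep the sketch short.

```python
# Simplified 3D Hough-style line search: vote over a coarse grid of directions,
# binning each candidate voxel by its projection onto the plane orthogonal to
# the trial direction; collinear points fall into the same bin.
import numpy as np

def hough_line_3d(points, n_dir=30, bin_size=2.0):
    """Return (votes, direction) of the best-supported straight line."""
    best_votes, best_dir = 0, None
    for phi in np.linspace(0.0, np.pi, n_dir, endpoint=False):
        for theta in np.linspace(0.0, np.pi / 2, n_dir // 2):
            d = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])              # trial line direction
            # Orthonormal basis (u, v) of the plane perpendicular to d.
            u = np.cross(d, [0.0, 0.0, 1.0])
            if np.linalg.norm(u) < 1e-6:               # d is (anti)parallel to z
                u = np.array([1.0, 0.0, 0.0])
            u /= np.linalg.norm(u)
            v = np.cross(d, u)
            proj = np.stack([points @ u, points @ v], axis=1)
            keys = np.round(proj / bin_size).astype(int)
            _, counts = np.unique(keys, axis=0, return_counts=True)
            if counts.max() > best_votes:
                best_votes, best_dir = int(counts.max()), d
    return best_votes, best_dir

# Synthetic "needle": voxels along a straight line plus random clutter.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 200)[:, None]
needle = t * np.array([[0.3, 0.2, 0.93]]) + np.array([[10.0, 5.0, 0.0]])
clutter = rng.uniform(0.0, 50.0, size=(300, 3))
votes, direction = hough_line_3d(np.vstack([needle, clutter]))
print("votes:", votes, "estimated direction:", np.round(direction, 2))
```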

  1. The design and implementation of stereoscopic 3D scalable vector graphics based on WebKit

    NASA Astrophysics Data System (ADS)

    Liu, Zhongxin; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    Scalable Vector Graphics (SVG), a language based on the eXtensible Markup Language (XML), is used to describe basic shapes embedded in webpages, such as circles and rectangles. However, it can only depict 2D shapes; as a consequence, web pages using classical SVG can only display 2D shapes on a screen. With the increasing development of stereoscopic 3D (S3D) technology, binocular 3D devices have come into wide use. In this context, we extend the widely used web rendering engine WebKit to support the description and display of S3D webpages, for which an extension of SVG is necessary. In this paper, we describe how to design and implement SVG shapes in stereoscopic 3D mode. Two attributes representing depth and thickness are added to support S3D shapes. The elimination of hidden lines and hidden surfaces, an important step in this project, is described as well. The modifications made to WebKit to support the simultaneous generation of the left view and the right view are also discussed. As shown in the results, in contrast to the 2D shapes generated by the Google Chrome web browser, the shapes obtained from our modified browser are in S3D mode. With the impression of depth and thickness, the shapes appear to be real 3D objects standing off the screen, rather than simple curves and lines as before.

  2. Neighboring block based disparity vector derivation for multiview compatible 3D-AVC

    NASA Astrophysics Data System (ADS)

    Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta

    2013-09-01

    3D-AVC, being developed by the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which encodes texture views and depth views simultaneously with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance degrades significantly. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well due to the lack of a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method that utilizes only the information of the texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector can be used efficiently by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview compatible mode, yielding about 20% BD-rate savings in the coded views and 26% in the synthesized views on average.
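
    The neighboring-block idea can be sketched in a few lines. This is a simplified stand-in, not the normative 3D-AVC derivation process; the block structure and checking order are assumptions for illustration.

```python
# Simplified stand-in for neighboring-block disparity derivation: scan a fixed
# set of neighboring blocks and reuse the first inter-view (disparity-
# compensated) motion vector found, falling back to zero disparity otherwise.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NeighborBlock:
    mv: tuple                 # (mv_x, mv_y)
    ref_is_interview: bool    # True if the reference picture lies in another view

def derive_disparity_vector(neighbors: List[Optional[NeighborBlock]]) -> tuple:
    """neighbors listed in checking order, e.g. [left, above, above_right]."""
    for nb in neighbors:
        if nb is not None and nb.ref_is_interview:
            return nb.mv      # this motion vector already points across views
    return (0, 0)             # fallback: zero disparity

left = NeighborBlock(mv=(3, 0), ref_is_interview=False)
above = NeighborBlock(mv=(-12, 1), ref_is_interview=True)
print(derive_disparity_vector([left, above, None]))   # -> (-12, 1)
```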

  3. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
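
    For readers unfamiliar with the transform stage, the following sketch shows a 3D wavelet decomposition of a synthetic hyperspectral cube using PyWavelets. It only illustrates the kind of decorrelating transform involved; it is not the ICER-3D wavelet or its integer arithmetic, and the wavelet choice, decomposition level, and cube size are arbitrary assumptions.

```python
# Illustration only: a 3D wavelet decomposition of a hyperspectral cube
# (band x row x column), the kind of transform ICER-3D is built around.
import numpy as np
import pywt

cube = np.random.rand(32, 64, 64)               # stand-in hyperspectral data
coeffs = pywt.wavedecn(cube, wavelet="bior2.2", level=2)

approx = coeffs[0]                               # coarsest approximation subband
print("approximation subband shape:", approx.shape)
for lvl, detail in enumerate(coeffs[1:], start=1):
    energy = sum(float(np.sum(band ** 2)) for band in detail.values())
    print(f"level {lvl}: {len(detail)} detail subbands, energy {energy:.1f}")
```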

  4. Shim3d Helmholtz Solution Package

    2009-01-29

    This suite of codes solves the Helmholtz Equation for the steady-state propagation of single-frequency electromagnetic radiation in an arbitrary 2D or 3D dielectric medium. Materials can be either transparent or absorptive (including metals) and are described entirely by their shape and complex dielectric constant. Dielectric boundaries are assumed to always fall on grid boundaries and the material within a single grid cell is considered to be uniform. Input to the problem is in the form of a Dirichlet boundary condition on a single boundary, and may be either analytic (Gaussian) in shape, or a mode shape computed using a separate code (such as the included eigenmode solver vwave20) and written to a file. Solution is via the finite difference method using Jacobi iteration for 3D problems or direct matrix inversion for 2D problems. Note that 3D problems that include metals will require different iteration parameters than described in the above reference. For structures with curved boundaries not easily modeled on a rectangular grid, the auxiliary codes helmholtz11 (2D), helm3d (semivectoral), and helmv3d (full vectoral) are provided. For these codes the finite difference equations are specified on a topologically regular triangular grid and solved using Jacobi iteration or direct matrix inversion as before. An automatic grid generator is supplied.
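
    A minimal 2D sketch of the solution route described above (5-point finite-difference Helmholtz operator, Dirichlet launch field on one boundary, direct sparse matrix inversion) is given below. It is not the Shim3d code: the grid, wavelength, refractive indices, and boundary handling are assumed example values, and no absorbing boundaries are included.

```python
# Sketch: scalar 2D Helmholtz, (d2/dx2 + d2/dy2 + k0^2 n^2) E = 0, with a
# Gaussian Dirichlet field launched on the x = 0 edge and zero on other edges.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx, ny, h = 120, 80, 0.05            # grid points and spacing (assumed units: um)
k0 = 2.0 * np.pi / 1.55              # free-space wavenumber for 1.55 um light
n = np.ones((nx, ny), dtype=complex)
n[:, 30:50] = 1.45                   # simple dielectric slab; complex n allows absorbers

y = np.arange(ny) * h
e_in = np.exp(-((y - y.mean()) / 0.4) ** 2)   # Gaussian launch field on x = 0

# 5-point Helmholtz operator acting on the (nx-2) x (ny-2) interior unknowns.
ix, iy = nx - 2, ny - 2
Dx = sp.diags([1, -2, 1], [-1, 0, 1], shape=(ix, ix)) / h**2
Dy = sp.diags([1, -2, 1], [-1, 0, 1], shape=(iy, iy)) / h**2
lap = sp.kron(Dx, sp.eye(iy)) + sp.kron(sp.eye(ix), Dy)
A = (lap + sp.diags((k0**2 * n[1:-1, 1:-1]**2).ravel())).tocsc()

# Move the known boundary values to the right-hand side (direct inversion path).
rhs = np.zeros(ix * iy, dtype=complex)
rhs[:iy] -= e_in[1:-1] / h**2        # unknowns adjacent to the x = 0 boundary row

E = spla.spsolve(A, rhs).reshape(ix, iy)
print("peak |E| in the interior:", float(np.abs(E).max()))
```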

  5. 3D Spray Droplet Distributions in Sneezes

    NASA Astrophysics Data System (ADS)

    Techet, Alexandra; Scharfman, Barry; Bourouiba, Lydia

    2015-11-01

    3D spray droplet clouds generated during human sneezing are investigated using the Synthetic Aperture Feature Extraction (SAFE) method, which relies on light field imaging (LFI) and synthetic aperture (SA) refocusing computational photography techniques. An array of nine high-speed cameras is used to image sneeze droplets and track them in 3D space and time (3D+T). An additional high-speed camera is used to track the motion of the head during sneezing. In the SAFE method, the raw images recorded by each camera in the array are preprocessed and binarized, simplifying post-processing after image refocusing and enabling the extraction of feature sizes and positions in 3D+T. These binary images are refocused using either additive or multiplicative methods, combined with thresholding. Sneeze droplet centroids, radii, distributions, and trajectories are determined and compared with existing data. The reconstructed 3D droplet centroids and radii enable a more complete understanding of the physical extent and fluid dynamics of sneeze ejecta. These measurements are important for understanding the infectious disease transmission potential of sneezes in various indoor environments.
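
    The additive shift-and-add refocusing step can be sketched as follows. This is not the SAFE implementation; the camera baselines, pixel scale, and toy droplet scene are assumptions chosen only to make the example self-contained.

```python
# Sketch of additive synthetic-aperture refocusing: shift each binarized view
# in proportion to its camera baseline and a trial depth, sum, and threshold,
# so features lying on that depth plane reinforce one another.
import numpy as np
from scipy.ndimage import shift as nd_shift

PIXELS_PER_METRE = 100.0               # assumed image-space calibration factor

def refocus_additive(binary_views, baselines, depth, thresh=0.6):
    """binary_views: (n_cam, H, W); baselines: list of (bx, by) offsets in metres."""
    acc = np.zeros(binary_views.shape[1:], dtype=float)
    for img, (bx, by) in zip(binary_views, baselines):
        # Parallax of a point at `depth` scales as baseline / depth (pinhole model).
        dx = PIXELS_PER_METRE * bx / depth
        dy = PIXELS_PER_METRE * by / depth
        acc += nd_shift(img.astype(float), (dy, dx), order=1, mode="constant")
    acc /= len(binary_views)
    return acc > thresh                 # in-focus features survive the threshold

# Toy scene: one droplet at 0.5 m viewed by a 3x3 camera array.
true_depth = 0.5
baselines = [(bx, by) for bx in (-0.05, 0.0, 0.05) for by in (-0.05, 0.0, 0.05)]
views = []
for bx, by in baselines:
    img = np.zeros((64, 64))
    cx = 32 - int(round(PIXELS_PER_METRE * bx / true_depth))   # apparent droplet position
    cy = 32 - int(round(PIXELS_PER_METRE * by / true_depth))
    img[cy - 1:cy + 2, cx - 1:cx + 2] = 1.0                    # binarized droplet blob
    views.append(img)

focused = refocus_additive(np.array(views), baselines, depth=true_depth)
print("in-focus pixels at the true depth:", int(focused.sum()))
```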

  6. T-HEMP3D user manual

    SciTech Connect

    Turner, D.

    1983-08-01

    The T-HEMP3D (Transportable HEMP3D) computer program is a derivative of the STEALTH three-dimensional thermodynamics code developed by Science Applications, Inc., under the direction of Ron Hofmann. STEALTH, in turn, is based entirely on the original HEMP3D code written at Lawrence Livermore National Laboratory. The primary advantage STEALTH has over its predecessors is that it was designed using modern structured design techniques, with rigorous programming standards enforced. This yields two benefits. First, the code is easily changeable; this is a necessity for a physics code used for research. The second benefit is that the code is easily transportable between different types of computers. The STEALTH program was transferred to LLNL under a cooperative development agreement. Changes were made primarily in three areas: material specification, coordinate generation, and the addition of sliding surface boundary conditions. The code was renamed T-HEMP3D to avoid confusion with other versions of STEALTH. This document summarizes the input to T-HEMP3D, as used at LLNL. It does not describe the physics simulated by the program, nor the numerical techniques employed. Furthermore, it does not describe the separate job steps of coordinate generation and post-processing, including graphical display of results. (WHK)

  7. Magnetic Properties of 3D Printed Toroids

    NASA Astrophysics Data System (ADS)

    Bollig, Lindsey; Otto, Austin; Hilpisch, Peter; Mowry, Greg; Nelson-Cheeseman, Brittany; Renewable Energy and Alternatives Lab (REAL) Team

    Transformers are ubiquitous in electronics today. Although toroidal geometries perform most efficiently, transformers are traditionally made with rectangular cross-sections due to the lower manufacturing costs. Additive manufacturing techniques (3D printing) can easily achieve toroidal geometries by building up a part through a series of 2D layers. To get strong magnetic properties in a 3D printed transformer, a composite filament is used containing Fe dispersed in a polymer matrix. How the resulting 3D printed toroid responds to a magnetic field depends on two structural factors of the printed 2D layers: fill factor (planar density) and fill pattern. In this work, we investigate how the fill factor and fill pattern affect the magnetic properties of 3D printed toroids. The magnetic properties of the printed toroids are measured by a custom circuit that produces a hysteresis loop for each toroid. Toroids with various fill factors and fill patterns are compared to determine how these two factors can affect the magnetic field the toroid can produce. These 3D printed toroids can be used for numerous applications in order to increase the efficiency of transformers by making it possible for manufacturers to make a toroidal geometry.
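
    The abstract does not detail the custom measurement circuit, so the sketch below simply writes out the standard two-winding toroid measurement: H follows the primary current via Ampere's law and B follows the integrated secondary voltage via Faraday's law. The winding counts, core dimensions, and waveforms are assumed example values, not the authors' setup.

```python
# Standard two-winding B-H measurement for a toroidal core, with synthetic
# stand-ins for the sampled primary current and secondary voltage.
import numpy as np

N1, N2 = 50, 50        # primary / secondary turns (assumed)
R_MEAN = 0.015         # mean magnetic path radius in metres (assumed)
AREA = 25e-6           # core cross-sectional area in m^2 (assumed)
FS = 1e5               # acquisition sample rate in Hz

def bh_loop(i_primary, v_secondary):
    """Turn sampled primary current / secondary voltage into an H-B trajectory."""
    H = N1 * i_primary / (2.0 * np.pi * R_MEAN)        # Ampere's law, A/m
    B = np.cumsum(v_secondary) / FS / (N2 * AREA)      # Faraday's law (numerical integral), T
    return H, B - B.mean()                             # remove the integration offset

# Synthetic waveforms: a 50 Hz drive current and a lagging induced voltage,
# which already trace out a lossy (open) loop in the H-B plane.
t = np.arange(0.0, 0.02, 1.0 / FS)
i_primary = 0.8 * np.sin(2.0 * np.pi * 50.0 * t)
v_secondary = 0.3 * np.cos(2.0 * np.pi * 50.0 * t + 0.3)
H, B = bh_loop(i_primary, v_secondary)
print(f"peak H = {np.abs(H).max():.0f} A/m, peak B = {np.abs(B).max():.2f} T")
```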

  8. Lifting Object Detection Datasets into 3D.

    PubMed

    Carreira, Joao; Vicente, Sara; Agapito, Lourdes; Batista, Jorge

    2016-07-01

    While data has certainly taken center stage in computer vision in recent years, it can still be difficult to obtain in certain scenarios. In particular, acquiring ground truth 3D shapes of objects pictured in 2D images remains a challenging feat, and this has hampered progress in recognition-based object reconstruction from a single image. Here we propose to bypass previous solutions, such as 3D scanning or manual design, which scale poorly, and instead populate object category detection datasets semi-automatically with dense, per-object 3D reconstructions, bootstrapped from: (i) class labels, (ii) ground truth figure-ground segmentations and (iii) a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion and then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions and to accurately estimate camera viewpoints on one of the most challenging existing object-category detection datasets, PASCAL VOC. We hope that our results will re-stimulate interest in joint object recognition and 3D reconstruction from a single image. PMID:27295458

  9. Full-color holographic 3D printer

    NASA Astrophysics Data System (ADS)

    Takano, Masami; Shigeta, Hiroaki; Nishihara, Takashi; Yamaguchi, Masahiro; Takahashi, Susumu; Ohyama, Nagaaki; Kobayashi, Akihiko; Iwata, Fujio

    2003-05-01

    A holographic 3D printer is a system that produces a direct hologram with full-parallax information using the 3-dimensional data of a subject from a computer. In this paper, we present a proposal for the reproduction of full-color images with the holographic 3D printer. In order to realize the 3-dimensional color image, we selected the 3 laser wavelengths of red (λ=633nm), green (λ=533nm), and blue (λ=442nm), and we built a one-step optical system using a projection system and a liquid crystal display. The 3-dimensional color image is obtained by synthesizing, in a 2D array, the multiple exposures made with these 3 wavelengths on each 250mm elementary hologram, and by moving the recording medium on an x-y stage. For natural color reproduction in the holographic 3D printer, we take the approach of digital processing based on color management technology. The matching between the input and output colors is performed by investigating, first, the relation between the gray-level transmittance of the LCD and the diffraction efficiency of the hologram and, second, by measuring the color displayed by the hologram to establish a correlation. In our first experimental results, a non-linear functional relation for single and multiple exposures of the three components was found. These results are the first step in the realization of a natural color 3D image produced by the holographic color 3D printer.

  10. Extra dimensions: 3D in PDF documentation

    SciTech Connect

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  11. Extra dimensions: 3D in PDF documentation

    DOE PAGESBeta

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  12. The importance of 3D dosimetry

    NASA Astrophysics Data System (ADS)

    Low, Daniel

    2015-01-01

    Radiation therapy has been getting progressively more complex for the past 20 years. Early radiation therapy techniques needed only basic dosimetry equipment: motorized water phantoms, ionization chambers, and basic radiographic film techniques. As intensity modulated radiation therapy and image guided therapy came into widespread practice, medical physicists were challenged with developing effective and efficient dose measurement techniques. The complex 3-dimensional (3D) nature of the dose distributions being delivered demanded the development of more quantitative and more thorough methods for dose measurement. Quality assurance vendors developed a wide array of multidetector arrays that have been enormously useful for measuring and characterizing dose distributions, and these have been made especially useful by the advent of 3D dose calculation systems based on the array measurements, as well as measurements made using film and portal imagers. Other vendors have been providing 3D calculations based on data from the linear accelerator or the record-and-verify system, providing thorough evaluation of the dose but lacking quality assurance (QA) of the dose delivery process, including machine calibration. The current state of 3D dosimetry is one of flux. The vendors and professional associations are trying to determine the optimal balance between thorough QA, labor efficiency, and quantitation. This balance will take some time to reach, but a necessary component will be the 3D measurement and independent calculation of delivered radiation therapy dose distributions.

  13. Integral 3D display using multiple LCDs

    NASA Astrophysics Data System (ADS)

    Okaichi, Naoto; Miura, Masato; Arai, Jun; Mishina, Tomoyuki

    2015-03-01

    The quality of the integral 3D images created by a 3D imaging system was improved by combining multiple LCDs to utilize a greater number of pixels than that possible with one LCD. A prototype of the display device was constructed by using four HD LCDs. An integral photography (IP) image displayed by the prototype is four times larger than that reconstructed by a single display. The pixel pitch of the HD display used is 55.5 μm, and the number of elemental lenses is 212 horizontally and 119 vertically. The 3D image pixel count is 25,228, and the viewing angle is 28°. Since this method is extensible, it is possible to display an integral 3D image of higher quality by increasing the number of LCDs. Using this integral 3D display structure makes it possible to make the whole device thinner than a projector-based display system. It is therefore expected to be applied to the home television in the future.

  14. 3D bioprinting for engineering complex tissues.

    PubMed

    Mandrycky, Christian; Wang, Zongjie; Kim, Keekyoung; Kim, Deok-Ho

    2016-01-01

    Bioprinting is a 3D fabrication technology used to precisely dispense cell-laden biomaterials for the construction of complex 3D functional living tissues or artificial organs. While still in its early stages, bioprinting strategies have demonstrated their potential use in regenerative medicine to generate a variety of transplantable tissues, including skin, cartilage, and bone. However, current bioprinting approaches still have technical challenges in terms of high-resolution cell deposition, controlled cell distributions, vascularization, and innervation within complex 3D tissues. While no one-size-fits-all approach to bioprinting has emerged, it remains an on-demand, versatile fabrication technique that may address the growing organ shortage as well as provide a high-throughput method for cell patterning at the micrometer scale for broad biomedical engineering applications. In this review, we introduce the basic principles, materials, integration strategies and applications of bioprinting. We also discuss the recent developments, current challenges and future prospects of 3D bioprinting for engineering complex tissues. Combined with recent advances in human pluripotent stem cell technologies, 3D-bioprinted tissue models could serve as an enabling platform for high-throughput predictive drug screening and more effective regenerative therapies. PMID:26724184

  15. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35x35x105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, a biological specimen's image can be captured in a single shot for ease of use. With the raw light field data and software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm that precisely determines depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel utilization efficiency and reduce crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescent particles separated by a cover glass within a 600um range, and show its focal stacks and 3-D positions.

  16. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R & D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on some 3D objects using an original constructive calculation method. Efficient algorithms for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method were proposed, and the peculiarities of shadow and image formation for typical elements of extended objects were studied. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL, and PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the VVER-1000 and VVER-440 nuclear reactors, as well as the automatic laser diagnostic system COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performance of these systems and the results of industrial testing are presented and discussed. The devices developed are in pilot operation at atomic and railway companies.

  17. BEAMS3D Neutral Beam Injection Model

    NASA Astrophysics Data System (ADS)

    McMillan, Matthew; Lazerson, Samuel A.

    2014-09-01

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous slowing down, and pitch angle scattering are modeled with the ADAS atomic physics database. Elementary benchmark calculations are presented to verify the collisionless particle orbits, NBI model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields. Notice: this manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the US Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  18. Accurate 3-D finite difference computation of traveltimes in strongly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Noble, M.; Gesret, A.; Belayouni, N.

    2014-12-01

    Seismic traveltimes and their spatial derivatives are the basis of many imaging methods such as pre-stack depth migration and tomography. A common approach to compute these quantities is to solve the eikonal equation with a finite-difference scheme. While many recently published algorithms for solving the eikonal equation now yield fairly accurate traveltimes for most applications, the spatial derivatives of the traveltimes remain very approximate. To address this accuracy issue, we develop a new hybrid eikonal solver that combines a spherical approximation close to the source with a plane-wave approximation far from it. This algorithm properly reproduces the spherical behaviour of wave fronts in the vicinity of the source. We implement a combination of 16 local operators that enables us to handle velocity models with sharp vertical and horizontal velocity contrasts. We combine these local operators with a global fast sweeping method to take into account all possible directions of wave propagation. Our formulation allows us to introduce a variable grid spacing in all three directions of space. We demonstrate the efficiency of this algorithm in terms of computational time, and the gain in accuracy of the computed traveltimes and their derivatives, on several numerical examples.
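
    As a point of reference for the global scheme mentioned above, a basic first-order fast sweeping eikonal solver in 2D looks like the following. It omits the hybrid spherical/plane-wave operators, the variable grid spacing, and the 3D extension that are the paper's contributions, and the two-layer velocity model is an arbitrary example.

```python
# Basic first-order fast sweeping solver for |grad T| = 1/v on a regular 2D grid.
import numpy as np

def fast_sweeping_2d(vel, src, h=1.0, n_sweeps=4):
    ny, nx = vel.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ys, xs in orders:               # alternate the four sweep orderings
            for i in ys:
                for j in xs:
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    if a == np.inf and b == np.inf:
                        continue            # no finite upwind neighbour yet
                    f = h / vel[i, j]       # slowness times grid step
                    if abs(a - b) >= f:     # causal arrival from one side only
                        t_new = min(a, b) + f
                    else:                   # two-sided Godunov update
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

# Two-layer velocity model with the source in the slow upper layer.
vel = np.full((60, 80), 1500.0)             # m/s
vel[30:, :] = 3000.0
T = fast_sweeping_2d(vel, src=(5, 40), h=10.0)
print("traveltime at the far corner: %.3f s" % T[-1, -1])
```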

  19. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the shortage of 3D content, 2D to 3D video conversion (2D-to-3D) has recently attracted attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its balance between labor cost and 3D effect. The location of the key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to maintain temporal continuity more reliably and to reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The key-frame interval is also taken into account to keep accumulated propagation errors under control and to guarantee minimal user interaction. Once the key-frame depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from two adjacent key-frames. The experimental results show that the proposed scheme performs better than an existing 2D-to-3D scheme with a fixed key-frame interval.
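
    The cross-bilateral flavour of the depth propagation step can be sketched as below. This is a simplified illustration, not the paper's shifted bilateral filter: motion compensation and the bi-directional blending of two key-frames are omitted, and the window size, sigmas, and toy frames are assumptions.

```python
# Joint/cross bilateral depth propagation: each non-key-frame pixel gathers
# key-frame depths weighted by spatial proximity and colour similarity.
import numpy as np

def propagate_depth(key_rgb, key_depth, target_rgb, radius=5,
                    sigma_s=3.0, sigma_c=0.1):
    h, w = key_depth.shape
    out = np.zeros_like(key_depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    key_rgb_p = np.pad(key_rgb, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    key_depth_p = np.pad(key_depth, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch_rgb = key_rgb_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_d = key_depth_p[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            diff = patch_rgb - target_rgb[y, x]             # colour similarity term
            w_color = np.exp(-np.sum(diff**2, axis=2) / (2 * sigma_c**2))
            weights = w_spatial * w_color
            out[y, x] = np.sum(weights * patch_d) / (np.sum(weights) + 1e-8)
    return out

# Toy frames: a bright square on a dark background that moves 2 px to the right.
key_rgb = np.zeros((40, 40, 3))
key_rgb[10:25, 10:25] = 1.0
target_rgb = np.zeros((40, 40, 3))
target_rgb[10:25, 12:27] = 1.0
key_depth = np.where(key_rgb[..., 0] > 0.5, 1.0, 5.0)       # near object, far background
prop = propagate_depth(key_rgb, key_depth, target_rgb)
print("propagated depth at object centre:", round(float(prop[17, 19]), 2))
```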

  20. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor. PMID:26386332