Science.gov

Sample records for 3D rendering software

  1. 3D rendering of passive millimeter-wave scenes using modified open source software

    NASA Astrophysics Data System (ADS)

    Murakowski, Maciej; Wilson, John; Murakowski, Janusz; Schneider, Garrett; Schuetz, Christopher; Prather, Dennis

    2011-05-01

    As millimeter-wave imaging technology becomes more mature, several applications are emerging for which this technology may be useful. However, effectively predicting how the nuances of millimeter-wave phenomenology affect its usefulness for a given application remains a challenge. To this end, an accurate millimeter-wave scene simulator would have tremendous value in predicting imager requirements for a given application. Herein, we present a passive millimeter-wave scene simulator built on the open-source 3D modeling software Blender. We describe the changes made to the Blender rendering engine to make it suitable for this purpose, including physically accurate reflections at each material interface, volumetric absorption and scattering, and tracking of both s and p polarizations. In addition, we have incorporated a mmW material database and a world model that emulates the effects of cold-sky profiles for varying weather conditions and frequencies of operation. The images produced by this model have been validated against calibrated experimental imagery captured by a passive scanning millimeter-wave imager for maritime, desert, and standoff detection applications.
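
    The polarization-resolved reflections described above are governed by the Fresnel equations. The following is a minimal sketch (an editor's illustration, not the authors' Blender modification) of the s- and p-polarized power reflectance at a dielectric interface:

```python
import math

def fresnel_reflectance(n1, n2, theta_i):
    """Power reflectance (Rs, Rp) for s and p polarizations at a
    dielectric interface, for incidence angle theta_i (radians),
    going from a medium of index n1 into one of index n2."""
    sin_t = n1 * math.sin(theta_i) / n2
    if sin_t >= 1.0:                       # total internal reflection
        return 1.0, 1.0
    theta_t = math.asin(sin_t)
    cos_i, cos_t = math.cos(theta_i), math.cos(theta_t)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)
    return r_s ** 2, r_p ** 2

# At normal incidence both polarizations reflect equally,
# ((n1-n2)/(n1+n2))^2; at Brewster's angle Rp vanishes.
Rs, Rp = fresnel_reflectance(1.0, 1.5, 0.0)
```

    A physically based mmW renderer would evaluate such coefficients (with complex, material-dependent indices) at every interface hit by a ray, which is why the s/p distinction must be tracked separately.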

  2. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  3. 3-D Volume Rendering of Sand Specimen

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Computed tomography (CT) images of resin-impregnated Mechanics of Granular Materials (MGM) specimens are assembled to provide 3-D volume renderings of density patterns formed by dislocation under the external loading stress profile applied during the experiments. Experiments flown on STS-79 and STS-89. Principal Investigator: Dr. Stein Sture

  4. Volume rendering for interactive 3D segmentation

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Derz, Claus

    1997-05-01

    Combined emission/absorption and reflection/transmission volume rendering is able to display poorly segmented structures from 3D medical image sequences. Visual cues such as shading and color let the user distinguish structures in the 3D display that are incompletely extracted by threshold segmentation. In order to be truly helpful, analyzed information needs to be quantified and transferred back into the data. We extend our previously presented scheme for such display by establishing a communication between visual analysis and the display process. The main tool is a selective 3D picking device. To be useful on a rather rough segmentation, the device itself and the display offer facilities for object selection. Selective intersection planes let the user discard information prior to choosing a tissue of interest. Subsequently, picking is carried out on the 2D display by casting a ray into the volume. The picking device is made pre-selective using already existing segmentation information. Thus, objects can be picked that are visible behind semi-transparent surfaces of other structures. Information generated by a later connected-component analysis can then be integrated into the data. Data examination is continued on an improved display, letting the user actively participate in the analysis process. Results of this display-and-interaction scheme proved to be very effective. The viewer's ability to extract relevant information from a complex scene is combined with the computer's ability to quantify this information. The approach introduces 3D computer graphics methods into user-guided image analysis, creating an analysis-synthesis cycle for interactive 3D segmentation.
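
    The ray-cast picking step described above can be sketched as a march through the volume that returns the first voxel passing a selection predicate; the predicate can test a segmentation label, which is what makes the device "pre-selective". This is an editor's illustration, not the authors' implementation:

```python
import numpy as np

def pick_voxel(volume, origin, direction, predicate, step=0.5, max_t=1000.0):
    """Cast a ray from `origin` along `direction` through a 3D array and
    return the index of the first voxel satisfying `predicate`, or None.
    Picking behind semi-transparent surfaces is expressed by a predicate
    that ignores the occluding structure's values/labels."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    t = 0.0
    while t < max_t:
        p = np.asarray(origin, float) + t * direction
        idx = tuple(np.round(p).astype(int))
        if all(0 <= i < s for i, s in zip(idx, volume.shape)):
            if predicate(volume[idx]):
                return idx
        t += step
    return None
```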

  5. Software for Acoustic Rendering

    NASA Technical Reports Server (NTRS)

    Miller, Joel D.

    2003-01-01

    SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.

  6. The rendering context for stereoscopic 3D web

    NASA Astrophysics Data System (ADS)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to apply stereoscopy technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.
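
    The left- and right-eye images such a browser produces differ by a horizontal on-screen parallax determined by each element's depth. A minimal similar-triangles sketch (illustrative numbers and a flat-screen model, not the paper's actual projection):

```python
def screen_parallax(z, eye_separation=60.0, viewer_distance=600.0):
    """Horizontal on-screen parallax (mm) for a point at signed depth z
    (mm; positive = behind the screen plane, zero = on the screen),
    derived from similar triangles between the eyes and the point."""
    return eye_separation * z / (viewer_distance + z)

def stereo_positions(x, z, eye_separation=60.0, viewer_distance=600.0):
    """Left-eye and right-eye screen x-coordinates for a point at (x, z):
    the two views are offset by half the parallax in opposite directions."""
    p = screen_parallax(z, eye_separation, viewer_distance)
    return x - p / 2.0, x + p / 2.0
```

    Elements at z = 0 coincide in both views (appearing on the screen plane), which is what preserves backward compatibility for ordinary 2D page content.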

  7. Incremental volume reconstruction and rendering for 3-D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry

    1992-09-01

    In this paper, we present approaches toward interactive visualization of real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
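
    The incremental reconstruction step, accumulating each tracked 2-D slice into a regular 3-D grid as it arrives, can be sketched as nearest-voxel splatting with a running weight buffer (an editor's simplification of the paper's method; axis vectors and averaging scheme are assumptions):

```python
import numpy as np

def insert_slice(volume, weight, slice2d, origin, u_axis, v_axis):
    """Accumulate one 2D slice into a regular 3D volume by nearest-voxel
    splatting. `origin` is the slice's corner in volume space; `u_axis`
    and `v_axis` are the directions of its rows and columns. The running
    weight buffer makes the reconstruction incremental: the average can
    be updated slice by slice as data arrives."""
    h, w = slice2d.shape
    for i in range(h):
        for j in range(w):
            p = (np.asarray(origin, float)
                 + i * np.asarray(u_axis, float)
                 + j * np.asarray(v_axis, float))
            idx = tuple(np.round(p).astype(int))
            if all(0 <= k < s for k, s in zip(idx, volume.shape)):
                volume[idx] += slice2d[i, j]
                weight[idx] += 1.0

def reconstructed(volume, weight):
    """Current estimate: weighted average where data exists, zero elsewhere."""
    return np.where(weight > 0, volume / np.maximum(weight, 1), 0.0)
```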

  8. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
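
    A core building block of haptic rendering is computing a restoring force when the haptic probe penetrates a virtual surface. The following is a textbook penalty-force sketch for a point probe against a sphere (an editor's illustration of the general idea, not the authors' algorithms; the stiffness value is arbitrary):

```python
import numpy as np

def penalty_force(probe_pos, sphere_center, sphere_radius, stiffness=500.0):
    """Penalty-based haptic force for a point probe against a sphere:
    when the probe penetrates the surface, push it out along the surface
    normal with a spring force proportional to penetration depth.
    Outside the sphere (no contact) the force is zero."""
    offset = np.asarray(probe_pos, float) - np.asarray(sphere_center, float)
    dist = np.linalg.norm(offset)
    penetration = sphere_radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)
    normal = offset / dist                  # outward surface normal
    return stiffness * penetration * normal
```

    Real haptic renderers refine this with god-object/proxy tracking so thin objects are not penetrated, and modulate the force to render texture and friction, which is the harder part this abstract alludes to.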

  9. Wire bonded 3D coils render air core microtransformers competitive

    NASA Astrophysics Data System (ADS)

    Moazenzadeh, A.; Spengler, N.; Lausecker, R.; Rezvani, A.; Mayer, M.; Korvink, J. G.; Wallrabe, U.

    2013-11-01

    We present a novel wafer-level fabrication method for 3D solenoidal microtransformers using an automatic wire bonder for chip-scale, very high frequency regime applications. Using standard microelectromechanical systems fabrication processes for the manufacturing of supporting structures, together with ultra-fast wire bonding for the fabrication of solenoids, enables the flexible and repeatable fabrication, at high throughput, of high performance air core microtransformers. The primary and secondary solenoids are wound one on top of the other in the lateral direction, using a 25 µm thick insulated wire. Besides commonly available gold wire, we also introduce insulated copper wire to our coil winding process. The influence of copper on the transformer properties is explored and compared to gold. A simulation model based on the solenoids’ wire bonding trajectories has been defined using the FastHenry software to accurately predict and optimize the transformer's inductive properties. The transformer chips are encapsulated in polydimethylsiloxane in order to protect the coils from environmental influences and mechanical damage. Meanwhile, the effect of the increase in the internal capacitance of the chips as a result of the encapsulation is analyzed. A fabricated transformer with 20 windings in both the primary and the secondary coils, and a footprint of 1 mm², yields an inductance of 490 nH, a maximum efficiency of 68%, and a coupling factor of 94%. The repeatability of the coil winding process was investigated by comparing the data of 25 identically processed devices. Finally, the microtransformers are benchmarked to underline the potential of the technology in rendering air core transformers competitive.

  10. 3D virtual colonoscopy with real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Wan, Ming; Li, Wei J.; Kreeger, Kevin; Bitter, Ingmar; Kaufman, Arie E.; Liang, Zhengrong; Chen, Dongqing; Wax, Mark R.

    2000-04-01

    In our previous work, we developed a virtual colonoscopy system on a high-end 16-processor SGI Challenge with an expensive hardware graphics accelerator. The goal of this work is to port the system to a low cost PC in order to increase its availability for mass screening. Recently, Mitsubishi Electric has developed a volume-rendering PC board, called VolumePro, which includes 128 MB of RAM and a vg500 rendering chip. The vg500 chip, based on Cube-4 technology, can render a 256³ volume at 30 frames per second. High image quality of volume rendering inside the colon is guaranteed by the full lighting model and 3D interpolation supported by the vg500 chip. However, the VolumePro board is lacking some features required by our interactive colon navigation. First, VolumePro currently does not support perspective projection, which is paramount for interior colon navigation. Second, the patient colon data is usually much larger than 256³ and cannot be rendered in real-time. In this paper, we present our solutions to these problems, including simulated perspective projection and axis-aligned boxing techniques, and demonstrate the high performance of our virtual colonoscopy system on low cost PCs.

  11. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
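
    The classic CPU formulation of SIRDS generation that the texture-based GPU method above accelerates can be sketched per scanline: pixels that the two eyes must fuse are linked with a separation that shrinks with depth, then colored left to right. An editor's simplified sketch (the separation range is an arbitrary assumption):

```python
import random

def sirds_row(depth_row, max_sep=90, min_sep=40):
    """Generate one row of a random-dot autostereogram. depth_row holds
    values in 0..1 (far..near); linked pixels must share a color so each
    eye sees a consistent pattern at the depth-dependent separation."""
    w = len(depth_row)
    link = list(range(w))                      # each pixel initially free
    for x in range(w):
        sep = int(max_sep - depth_row[x] * (max_sep - min_sep))
        left, right = x - sep // 2, x - sep // 2 + sep
        if 0 <= left and right < w:
            link[right] = left                 # right pixel mirrors left
    row = [0] * w
    for x in range(w):                         # left-to-right coloring
        row[x] = row[link[x]] if link[x] != x else random.randint(0, 1)
    return row
```

    The GPU version in the paper replaces the sequential left-to-right dependency with a vertex program computing per-vertex parallax and texture-unit pattern lookups, which is what makes interactive frame rates possible.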

  12. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.

  13. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  14. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time. It is used to express deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their time-consuming rendering, or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is also time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered in interactive time on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
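
    The brick similarity test, skipping the re-upload of bricks that have not changed between time steps, can be sketched as follows. The mean-absolute-difference test and its tolerance are illustrative stand-ins for the paper's actual similarity criterion:

```python
import numpy as np

def bricks_to_reload(prev_volume, new_volume, brick=16, tol=0.01):
    """Return indices of bricks whose content changed enough to warrant
    re-uploading as a 3D texture; unchanged bricks reuse the copy that
    is already resident in texture memory, saving transfer time."""
    changed = []
    for z in range(0, new_volume.shape[0], brick):
        for y in range(0, new_volume.shape[1], brick):
            for x in range(0, new_volume.shape[2], brick):
                sl = (slice(z, z + brick), slice(y, y + brick), slice(x, x + brick))
                diff = np.abs(new_volume[sl].astype(float)
                              - prev_volume[sl].astype(float)).mean()
                if diff > tol:
                    changed.append((z, y, x))
    return changed
```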

  15. Order-of-magnitude faster isosurface rendering in software on a PC than using dedicated general-purpose rendering hardware

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey

    1999-05-01

    The purpose of this work is to compare the speed of isosurface rendering in software with that using dedicated hardware. Input data consist of 10 different objects from various parts of the body and various modalities, with a variety of surface sizes and shapes. The software rendering technique consists of a particular method of voxel-based surface rendering, called shell rendering. The hardware method is OpenGL-based and uses the surfaces constructed from our implementation of the 'Marching Cubes' algorithm. The hardware environment consists of a variety of platforms including a Sun Ultra I with a Creator3D graphics card and a Silicon Graphics Reality Engine II, both with polygon rendering hardware, and a 300 MHz Pentium PC. The results indicate that the software method was 18 to 31 times faster than any of the hardware rendering methods. This work demonstrates that a software implementation of a particular rendering algorithm can outperform dedicated hardware. We conclude that for medical surface visualization, expensive dedicated hardware engines are not required. More importantly, available software algorithms on a 300 MHz Pentium PC outperform rendering via hardware engines by a factor of 18 to 31.

  16. Points based reconstruction and rendering of 3D shapes from large volume dataset

    NASA Astrophysics Data System (ADS)

    Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming

    2003-05-01

    In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information they contain, but the huge volumes of data generated by modern medical imaging devices constantly challenge real-time processing and rendering algorithms. Spurred by the great success of point-based rendering (PBR) in computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient point-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.

  17. Real-time rendering method and performance evaluation of composable 3D lenses for interactive VR.

    PubMed

    Borst, Christoph W; Tiesel, Jan-Phillip; Best, Christopher M

    2010-01-01

    We present and evaluate a new approach for real-time rendering of composable 3D lenses for polygonal scenes. Such lenses, usually called "volumetric lenses," are an extension of 2D Magic Lenses to 3D volumes in which effects are applied to scene elements. Although the composition of 2D lenses is well known, 3D composition was long considered infeasible due to both geometric and semantic complexity. Nonetheless, for a scene with multiple interactive 3D lenses, the problem of intersecting lenses must be considered. Intersecting 3D lenses in meaningful ways supports new interfaces such as hierarchical 3D windows, 3D lenses for managing and composing visualization options, or interactive shader development by direct manipulation of lenses providing component effects. Our 3D volumetric lens approach differs from other approaches and is one of the first to address efficient composition of multiple lenses. It is well-suited to head-tracked VR environments because it requires no view-dependent generation of major data structures, allowing caching and reuse of full or partial results. A Composite Shader Factory module composes shader programs for rendering composite visual styles and geometry of intersection regions. Geometry is handled by Boolean combinations of region tests in fragment shaders, which allows both convex and nonconvex CSG volumes for lens shape. Efficiency is further addressed by a Region Analyzer module and by broad-phase culling. Finally, we consider the handling of order effects for composed 3D lenses. PMID:20224135

  18. 3D rendering of SAR distributions from Thermotron RF-8 using a ray casting technique.

    PubMed

    Paliwal, B R; Gehring, M A; Sanders, C; Mackie, T R; Raffety, H M; Song, C W

    1991-01-01

    A comprehensive 3D visualization package developed for CT-based 3D radiation treatment planning has been modified to volume-render SAR data. The program accepts data from sequential thermographic thermometry measurements as well as calculated data from thermal models. In this presentation sample data obtained from a capacitive heating system 'Thermotron-RF8' is presented. This capability allows the generation of accurate standardized volumetric images of SAR and provides a valuable tool to better preplan hyperthermia treatments. PMID:1919152

  19. 3D colour visualization of label images using volume rendering techniques.

    PubMed

    Vandenhouten, R; Kottenhoff, R; Grebe, R

    1995-01-01

    Volume rendering methods for the visualization of 3D image data sets have been developed and collected in a C library. The core algorithm consists of a perspective ray casting technique for a natural and realistic view of the 3D scene. New edge operator shading methods are employed for a fast and information preserving representation of surfaces. Control parameters of the algorithm can be tuned to have either smoothed surfaces or a very detailed rendering of the geometrical structure. Different objects can be distinguished by different colours. Shadow ray tracing has been implemented to improve the realistic impression of the 3D image. For a simultaneous representation of objects in different depths, hiding each other, two types of transparency mode are used (wireframe and glass transparency). Single objects or groups of objects can be excluded from the rendering (peeling). Three orthogonal cutting planes or one arbitrarily placed cutting plane can be applied to the rendered objects in order to get additional information about inner structures, contours, and relative positions. PMID:8569308
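
    The core of the perspective ray-casting technique described above is front-to-back compositing of emission/absorption samples along each ray. A minimal single-ray sketch (an editor's illustration of the standard operation, not the library's code):

```python
def composite_front_to_back(samples):
    """Front-to-back emission/absorption compositing along one ray.
    `samples` is a sequence of (color, opacity) pairs ordered from the
    eye into the volume; accumulation stops early once the ray is
    effectively opaque (early ray termination)."""
    color_acc, alpha_acc = 0.0, 0.0
    for c, a in samples:
        color_acc += (1.0 - alpha_acc) * a * c
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc >= 0.999:
            break
    return color_acc, alpha_acc
```

    The transparency modes the abstract mentions (wireframe and glass) amount to assigning partial opacities to an object's samples so that structures behind it still contribute to the accumulated color.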

  20. Volume rendering segmented data using 3D textures: a practical approach for intra-operative visualization

    NASA Astrophysics Data System (ADS)

    Subramanian, Navneeth; Mullick, Rakesh; Vaidya, Vivek

    2006-03-01

    Volume rendering has high utility in visualization of segmented datasets. However, volume rendering of the segmented labels along with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at the sharp boundaries. This issue is further amplified in 3D texture-based volume rendering due to the inaccessibility of the interpolation stage. We present an approach which helps minimize intermixing artifacts while maintaining the high performance of 3D texture based volume rendering - both of which are critical for intra-operative visualization. Our approach uses a 2D transfer function based classification scheme where label distinction is achieved through an encoding that generates unique gradient values for labels. This helps ensure that labelled voxels always map to distinct regions in the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, our algorithm does not require multiple passes for rendering and supports more than four masks. It also allows for real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities are available with minimal texture memory requirements amongst comparable algorithms. Results are presented on clinical and phantom data.

  1. Research on transformation and optimization of large scale 3D modeling for real time rendering

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Yang, Yongchao; Zhao, Gang; He, Bin; Shen, Guosheng

    2011-12-01

    During the simulation of a real-time three-dimensional scene, popular modeling software and real-time rendering platforms are not compatible. The common solution is to create the three-dimensional scene model in modeling software and then transform it into a format supported by the rendering platform. This paper takes digital campus scene simulation as an example, and analyzes and solves the problems of surface loss, texture distortion and loss, model flicker, and so on during the transformation from 3ds Max to MultiGen Creator. In addition, it proposes an optimization strategy for the transformed model. The results show that this strategy resolves the various problems arising in transformation and speeds up rendering of the model.

  2. Segmentation, surface rendering, and surface simplification of 3-D skull images for the repair of a large skull defect

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Shi, Pengfei; Li, Shuguang

    2009-10-01

    Given the potential demonstrated by research into bone-tissue engineering, the use of medical image data for the rapid prototyping (RP) of scaffolds is a subject worthy of research. Computer-aided design and manufacture and medical imaging have created new possibilities for RP. Accurate and efficient design and fabrication of anatomic models is critical to these applications. We explore the application of RP computational methods to the repair of a pediatric skull defect. The focus of this study is the segmentation of the defect region seen in computerized tomography (CT) slice images of this patient's skull and the three-dimensional (3-D) surface rendering of the patient's CT-scan data. We assess whether our segmentation and surface rendering software can improve the generation of an implant model to fill a skull defect.

  3. a Cache Design Method for Spatial Information Visualization in 3d Real-Time Rendering Engine

    NASA Astrophysics Data System (ADS)

    Dai, X.; Xiong, H.; Zheng, X.

    2012-07-01

    A well-designed cache system has a positive impact on a 3D real-time rendering engine, and the effect becomes more pronounced as the amount of visualization data grows. The cache is what allows the engine to browse smoothly through data that lies outside core memory or arrives over the internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data currently being rendered by the engine; the pre-rendering cache stores data dispatched according to the position of the viewpoint in the horizontal and vertical directions; the elimination cache stores data evicted from the other two caches and waiting to be written to the disk cache. The disk cache uses multiple large files. When a disk cache file reaches its size limit (128 MB in the experiment), no item is eliminated from the file; instead, a new large cache file is created. If the number of large files exceeds a preset maximum, the earliest file is deleted from disk. In this way only one file is open for both writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads update the cache data: they load data into the rendering cache as soon as possible for rendering, load data into the pre-rendering cache for the next few frames, and load data into the elimination cache when it is not needed for the moment. In our experiment, two threads are designed. The first thread organizes the memory cache according to the viewpoint and maintains two lists, the adding list and the deleting list; the adding list indexes the data that should be
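
The disk-cache rollover policy described above, one writable large file with read-only predecessors and oldest-file deletion, can be sketched in a few lines (the class, sizes, and bookkeeping are illustrative, not from the paper):

```python
from collections import OrderedDict

class TileCache:
    """Sketch of the three-tier memory cache plus the capped large-file
    disk cache described above. Defaults are illustrative assumptions."""

    def __init__(self, max_disk_files=4, max_file_bytes=128 * 2**20):
        self.rendering = {}        # tiles being rendered this frame
        self.pre_rendering = {}    # tiles prefetched around the viewpoint
        self.elimination = {}      # evicted tiles, pending write to disk
        self.disk_files = OrderedDict()  # file id -> bytes used
        self.max_disk_files = max_disk_files
        self.max_file_bytes = max_file_bytes
        self._open_file = 0        # the only file open for writing
        self.disk_files[self._open_file] = 0

    def evict(self, key, tile_bytes):
        """Move a tile out of the memory tiers and append it to the one
        writable disk file, rolling over to a new file when full."""
        self.elimination[key] = tile_bytes
        if self.disk_files[self._open_file] + len(tile_bytes) > self.max_file_bytes:
            self._open_file += 1                     # old file becomes read-only
            self.disk_files[self._open_file] = 0
            if len(self.disk_files) > self.max_disk_files:
                self.disk_files.popitem(last=False)  # delete the earliest file
        self.disk_files[self._open_file] += len(tile_bytes)

cache = TileCache(max_disk_files=4)
cache.evict("tile-0-0", b"\x00" * 1024)   # lands in the writable file
```

With a small file-size limit, evicting a few oversized tiles rolls the writable file over and drops the earliest file, mirroring the paper's policy of keeping exactly one file open for writing.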

  4. 3D Reconstruction from X-ray Fluoroscopy for Clinical Veterinary Medicine using Differential Volume Rendering

    NASA Astrophysics Data System (ADS)

    Khongsomboon, Khamphong; Hamamoto, Kazuhiko; Kondo, Shozo

    3D reconstruction from ordinary X-ray equipment, rather than CT or MRI, is needed in clinical veterinary medicine. The authors have already proposed a 3D reconstruction technique based on X-ray photographs to present bone structure. Although the reconstruction is useful for veterinary medicine, the technique has two problems: one concerns X-ray exposure and the other the data acquisition process. One kind of ordinary X-ray equipment that can solve both problems is X-ray fluoroscopy. Therefore, in this paper, we propose a method for 3D reconstruction from X-ray fluoroscopy for clinical veterinary medicine. Fluoroscopy is usually used to observe the movement of organs, or to identify the position of an organ for surgery, at weak X-ray intensity. Since fluoroscopy outputs the observed result as a movie, the two problems caused by the use of X-ray photographs are solved. However, a new problem arises from the weak X-ray intensity: although fluoroscopy captures not only bone structure but also soft tissues, the contrast is very low and some soft tissues are very difficult to recognize. Being able to observe both bone structure and soft tissues clearly with ordinary X-ray equipment would be very useful in clinical veterinary medicine. To solve this problem, this paper proposes a new method to determine opacity in the volume rendering process. The opacity is determined according to the 3D differential coefficient of the reconstruction. This differential volume rendering can present a 3D structure image of multiple organs volumetrically and clearly for clinical veterinary medicine. This paper shows results of simulation, experimental investigation of a small dog, and evaluation by veterinarians.

  5. Automatic bone-free rendering of cerebral aneurysms via 3D CTA

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Abrahams, John M.; Udupa, Jayaram K.

    2001-07-01

    3D computed tomographic angiography (3D-CTA) has been described as an alternative to digital subtraction angiography (DSA) in the clinical evaluation of cerebrovascular diseases. A bone-free rendition of 3D-CTA facilitates a quick and accurate clinical evaluation of the disease. We propose a new bone removal process that is accomplished in three sequential steps: (1) primary delineation and removal of bones, (2) removal of the effect of partial voluming around bone surfaces, and (3) removal of thin bones around the nose, mouth and eyes. The bone-removed image of vasculature and aneurysms is rendered via maximum intensity projection (MIP). The method has been tested on 10 patients' 3D-CTA images acquired on a General Electric Hi-Speed Spiral CT scanner. The algorithm successfully subtracted bone, showing the cerebral vasculature in all 10 patients' data. The method allows for a unique analysis of 3D-CTA data with near-automatic removal of bones. This greatly reduces the need for the manual removal of bones that is currently utilized and greatly facilitates the visualization of the anatomy of vascular lesions.
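
The rendering step itself, maximum intensity projection, is simply a per-ray maximum; for axis-aligned rays it reduces to one array reduction (the toy vessel volume is illustrative):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection along one axis, the rendering step
    used above for the bone-removed vasculature."""
    return volume.max(axis=axis)

# Toy CTA volume: a bright "vessel" along axis 0 in a dark background.
vol = np.zeros((8, 8, 8))
vol[:, 3, 3] = 200.0
proj = mip(vol, axis=0)   # an 8 x 8 image; the vessel projects to (3, 3)
```

MIP is why bone removal matters: any bone voxel left along a ray would out-shine the contrast-filled vessels in the projection.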

  6. Software-based geometry operations for 3D computer graphics

    NASA Astrophysics Data System (ADS)

    Sima, Mihai; Iancu, Daniel; Glossner, John; Schulte, Michael; Mamidi, Suman

    2006-02-01

    In order to support a broad dynamic range and a high degree of precision, many of 3D rendering's fundamental algorithms have traditionally been performed in floating point. However, fixed-point data representation is preferable to floating point in graphics applications on embedded devices, where performance is of paramount importance while the dynamic range and precision requirements are limited by small display sizes (current PDAs are 640 × 480 (VGA), while cell-phone displays are even smaller). In this paper we analyze the efficiency of a CORDIC-augmented Sandbridge processor implementing a vertex processor in software using fixed-point arithmetic. A CORDIC-based solution for vertex processing exhibits a number of advantages over classical multiply-and-accumulate solutions. First, since a single primitive is used to describe the computation, the code can easily be vectorized and multithreaded, and thus fits the major Sandbridge architectural features. Second, since a CORDIC iteration consists of only a shift operation followed by an addition, the computation can be deeply pipelined. Initially, we outline the Sandbridge architecture extension, which encompasses a CORDIC functional unit and the associated instructions. Then, we consider rigid-body rotation, lighting, exponentiation, vector normalization, and perspective division (some of the most important data-intensive 3D graphics kernels) and propose a scheme to implement them on the CORDIC-augmented Sandbridge processor. Preliminary results indicate that the performance improvement with the extended instruction set ranges from 3× to 10× (with the exception of rigid-body rotation).
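
A CORDIC iteration really is just a shift and an add. A minimal fixed-point rotation sketch (the Q16 format, 16 iterations, and gain constant are illustrative choices, not details of the Sandbridge functional unit):

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate (x, y) by `angle` (radians, |angle| < ~1.74) using only a
    shift and an add per iteration: the CORDIC primitive discussed above."""
    FRAC = 16
    GAIN = 0.6072529350088813          # pre-applied CORDIC gain correction
    xf = int(x * GAIN * (1 << FRAC))   # Q16 fixed-point operands
    yf = int(y * GAIN * (1 << FRAC))
    z = angle
    for i in range(iterations):
        step = math.atan(2.0 ** -i)    # angle table (a small ROM in hardware)
        if z >= 0.0:
            xf, yf, z = xf - (yf >> i), yf + (xf >> i), z - step
        else:
            xf, yf, z = xf + (yf >> i), yf - (xf >> i), z + step
    return xf / (1 << FRAC), yf / (1 << FRAC)

x2, y2 = cordic_rotate(1.0, 0.0, math.pi / 4)   # ≈ (0.7071, 0.7071)
```

Since every iteration has the same shift-add shape, the loop vectorizes across vertices and pipelines across iterations, which is exactly the fit to the Sandbridge architecture claimed above.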

  7. Scoops3D: software to analyze 3D slope stability throughout a digital landscape

    USGS Publications Warehouse

    Reid, Mark E.; Christian, Sarah B.; Brien, Dianne L.; Henderson, Scott T.

    2015-01-01

    The computer program, Scoops3D, evaluates slope stability throughout a digital landscape represented by a digital elevation model (DEM). The program uses a three-dimensional (3D) method of columns approach to assess the stability of many (typically millions) potential landslides within a user-defined size range. For each potential landslide (or failure), Scoops3D assesses the stability of a rotational, spherical slip surface encompassing many DEM cells using a 3D version of either Bishop’s simplified method or the Ordinary (Fellenius) method of limit-equilibrium analysis. Scoops3D has several options for the user to systematically and efficiently search throughout an entire DEM, thereby incorporating the effects of complex surface topography. In a thorough search, each DEM cell is included in multiple potential failures, and Scoops3D records the lowest stability (factor of safety) for each DEM cell, as well as the size (volume or area) associated with each of these potential landslides. It also determines the least-stable potential failure for the entire DEM. The user has a variety of options for building a 3D domain, including layers or full 3D distributions of strength and pore-water pressures, simplistic earthquake loading, and unsaturated suction conditions. Results from Scoops3D can be readily incorporated into a geographic information system (GIS) or other visualization software. This manual includes information on the theoretical basis for the slope-stability analysis, requirements for constructing and searching a 3D domain, a detailed operational guide (including step-by-step instructions for using the graphical user interface [GUI] software, Scoops3D-i) and input/output file specifications, practical considerations for conducting an analysis, results of verification tests, and multiple examples illustrating the capabilities of Scoops3D. 
Easy-to-use software installation packages are available for the Windows or Macintosh operating systems; these packages
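
For the simpler of Scoops3D's two limit-equilibrium options, the Ordinary (Fellenius) method, the factor of safety over a set of columns can be sketched as follows (the column tuples, uniform strength, and dry conditions are illustrative simplifications, not Scoops3D's input format):

```python
import math

def fellenius_fs(columns, cohesion, phi_deg):
    """Ordinary (Fellenius) limit-equilibrium factor of safety summed over
    columns. Each column is (weight, base_area, base_inclination_deg)."""
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = driving = 0.0
    for weight, base_area, alpha_deg in columns:
        a = math.radians(alpha_deg)
        # Shear resistance: cohesion plus friction on the column base.
        resisting += cohesion * base_area + weight * math.cos(a) * tan_phi
        # Shear stress driving failure along the slip surface.
        driving += weight * math.sin(a)
    return resisting / driving   # FS < 1 indicates a potential failure

# Toy slip surface of three columns (weights in kN, base areas in m^2).
cols = [(100.0, 2.0, 10.0), (200.0, 2.0, 25.0), (150.0, 2.0, 40.0)]
fs = fellenius_fs(cols, cohesion=10.0, phi_deg=30.0)
```

Scoops3D evaluates this kind of sum over many DEM cells for each trial spherical slip surface and records the minimum factor of safety per cell.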

  8. Performance Evaluation of 3D Modeling Software for UAV Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs with freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and the geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, the 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes, and as a result only a few studies have evaluated their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  9. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using one-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher, or missed entirely, if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, enhancers folding over onto genes can only be fully represented in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.

  10. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors; however, they are clearly insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment in which one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (the capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse the block diagram (useful for checking relationships among a large number of processes or processors) and the time chart (useful for checking the precise timing of synchronization) into a single 3D space. The 3D representation provides a capability for direct and intuitive planning and understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), the authors' prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  11. Adaptive volume rendering of cardiac 3D ultrasound images: utilizing blood pool statistics

    NASA Astrophysics Data System (ADS)

    Åsen, Jon Petter; Steen, Erik; Kiss, Gabriel; Thorstensen, Anders; Rabben, Stein Inge

    2012-03-01

    In this paper we introduce and investigate an adaptive direct volume rendering (DVR) method for real-time visualization of cardiac 3D ultrasound. DVR is commonly used in cardiac ultrasound to visualize interfaces between tissue and blood. However, this is particularly challenging with ultrasound images due to variability of the signal within tissue as well as variability of the noise signal within the blood pool. Standard DVR involves a global mapping of sample values to opacity by an opacity transfer function (OTF). While a global OTF may represent the interface correctly in one part of the image, it may result in tissue dropouts, or even artificial interfaces within the blood pool, in other parts of the image. In order to increase the correctness of the rendered image, the presented method utilizes blood pool statistics to make regional adjustments to the OTF. The regional adaptive OTF was compared with a global OTF in a dataset of apical recordings from 18 subjects. For each recording, three renderings from standard views (apical 4-chamber (A4C), inverted A4C (IA4C) and mitral valve (MV)) were generated for both methods, and each rendering was tuned to the best visual appearance by a physician echocardiographer. For each rendering we measured the mean absolute error (MAE) between the rendering depth buffer and a validated left ventricular segmentation. The difference d in MAE between the global and regional method was calculated and t-test results are reported, with significant improvements for the regional adaptive method (d_A4C = 1.5 +/- 0.3 mm, d_IA4C = 2.5 +/- 0.4 mm, d_MV = 1.7 +/- 0.2 mm, d.f. = 17, all p < 0.001). This improvement by the regional adaptive method was confirmed through qualitative visual assessment by an experienced physician echocardiographer, who concluded that the regional adaptive method produced rendered images with fewer tissue dropouts and fewer spurious structures inside the blood pool in the vast majority of the renderings. The algorithm has been
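
The regional adjustment can be sketched as an opacity ramp whose threshold tracks local blood-pool statistics (the linear ramp shape and the k·sigma offset are illustrative assumptions, not the paper's exact OTF):

```python
import numpy as np

def adaptive_otf(samples, blood_mean, blood_std, k=3.0):
    """Map sample intensities to opacity with a linear-ramp OTF whose
    threshold adapts to regional blood-pool statistics, in the spirit of
    the method above. Opacity is 0 at the regional noise ceiling and
    reaches 1 one ramp-width above it."""
    lo = blood_mean + k * blood_std    # regional blood-pool noise ceiling
    width = max(k * blood_std, 1e-6)   # ramp width scales with noise spread
    return np.clip((samples - lo) / width, 0.0, 1.0)

samples = np.array([0.0, 10.0, 40.0, 80.0])
# A quiet region keeps weak tissue echoes visible...
quiet = adaptive_otf(samples, blood_mean=5.0, blood_std=2.0)
# ...while a noisy region suppresses the same values as blood-pool noise.
noisy = adaptive_otf(samples, blood_mean=20.0, blood_std=10.0)
```

The same sample value is rendered opaque in one region and transparent in another, which is what lets the method avoid both tissue dropouts and spurious blood-pool structures.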

  12. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

    In this paper, we present 3D virtual phantom design software developed with an object-oriented programming methodology and dedicated to medical physics research. The software is named Magical Phantom (MPhantom) and is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful tool for 3D phantom configuration and has passed application tests on real scenes. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and research on X-ray imaging reconstruction algorithms. PMID:24804488
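
A minimal sketch of voxelizing a simple phantom into a CT-number array, the kind of object a builder module like MPhantom's would construct before DICOM export (the function, shapes, and HU values are hypothetical, not MPhantom's API):

```python
import numpy as np

def sphere_phantom(shape, center, radius, inside_hu=0.0, outside_hu=-1000.0):
    """Voxelize a sphere of CT numbers into a 3D array: a water-equivalent
    sphere (0 HU) in air (-1000 HU) by default."""
    zz, yy, xx = np.indices(shape)
    dist2 = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
             + (xx - center[2]) ** 2)
    vol = np.full(shape, outside_hu)
    vol[dist2 <= radius ** 2] = inside_hu
    return vol

# A 32^3 phantom with a radius-8 sphere at its center.
phantom = sphere_phantom((32, 32, 32), center=(16, 16, 16), radius=8)
```

Each z-slice of such an array maps naturally onto one exported CT image, which is how a voxel phantom becomes a DICOM series.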

  13. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid-1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other modalities. We have advanced from rendering anatomy to aid diagnosis and to visualize complex anatomic structures, to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the available display device is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  14. Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1996-01-01

    As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.

  15. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  16. SOAX: A software for quantification of 3D biopolymer networks

    PubMed Central

    Xu, Ting; Vavylonis, Dimitrios; Tsai, Feng-Ching; Koenderink, Gijsje H.; Nie, Wei; Yusuf, Eddy; I-Ju Lee; Wu, Jian-Qiu; Huang, Xiaolei

    2015-01-01

    Filamentous biopolymer networks in cells and tissues are routinely imaged by confocal microscopy. Image analysis methods enable quantitative study of the properties of these curvilinear networks. However, software tools to quantify the geometry and topology of these often dense 3D networks and to localize network junctions are scarce. To fill this gap, we developed a new software tool called “SOAX”, which can accurately extract the centerlines of 3D biopolymer networks and identify network junctions using Stretching Open Active Contours (SOACs). It provides an open-source, user-friendly platform for network centerline extraction, 2D/3D visualization, manual editing and quantitative analysis. We propose a method to quantify the performance of SOAX, which helps determine the optimal extraction parameter values. We quantify several different types of biopolymer networks to demonstrate SOAX's potential to help answer key questions in cell biology and biophysics from a quantitative viewpoint. PMID:25765313

  17. TINA manual landmarking tool: software for the precise digitization of 3D landmarks

    PubMed Central

    2012-01-01

    Background Interest in the placing of landmarks and subsequent morphometric analyses of shape for 3D data has increased with the increasing accessibility of computed tomography (CT) scanners. However, current computer programs for this task suffer from various practical drawbacks. We present here a free software tool that overcomes many of these problems. Results The TINA Manual Landmarking Tool was developed for the digitization of 3D data sets. It enables the generation of a modifiable 3D volume rendering display plus matching orthogonal 2D cross-sections from DICOM files. The object can be rotated and axes defined and fixed. Predefined lists of landmarks can be loaded and the landmarks identified within any of the representations. Output files are stored in various established formats, depending on the preferred evaluation software. Conclusions The software tool presented here provides several options facilitating the placing of landmarks on 3D objects, including volume rendering from DICOM files, definition and fixation of meaningful axes, easy import, placement, control, and export of landmarks, and handling of large datasets. The TINA Manual Landmark Tool runs under Linux and can be obtained for free from http://www.tina-vision.net/tarballs/. PMID:22480150

  18. Modelling Gaia CCD pixels with Silvaco 3D engineering software

    NASA Astrophysics Data System (ADS)

    Seabroke, G. M.; Prod'Homme, T.; Hopkinson, G.; Burt, D.; Robbins, M.; Holland, A.

    2011-02-01

    Gaia will only achieve its unprecedented measurement accuracy requirements with detailed calibration and correction for radiation damage. We present our Silvaco 3D engineering software model of the Gaia CCD pixel and two of its applications for Gaia: (1) physically interpreting supplementary buried channel (SBC) capacity measurements (pocket-pumping and first pixel response) in terms of e2v manufacturing doping alignment tolerances; and (2) deriving electron densities within a charge packet as a function of the number of constituent electrons and 3D position within the charge packet as input to microscopic models being developed to simulate radiation damage.

  19. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for managing the data are available on the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VRMesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
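
A loading-time measurement of the kind used in the comparison can be sketched as follows (the ASCII XYZ format and the tiny synthetic cloud are illustrative; the actual tests used large mobile LiDAR datasets):

```python
import os
import tempfile
import time

import numpy as np

def timed_load(path):
    """Load an ASCII XYZ point cloud and report wall-clock loading time,
    one of the metrics compared above."""
    t0 = time.perf_counter()
    points = np.loadtxt(path)              # N x 3 array of XYZ coordinates
    return points, time.perf_counter() - t0

# Exercise the harness on a small synthetic cloud.
cloud = np.random.rand(1000, 3)
with tempfile.NamedTemporaryFile(mode="w", suffix=".xyz", delete=False) as f:
    np.savetxt(f, cloud)
points, seconds = timed_load(f.name)
os.unlink(f.name)
```

Memory metrics such as working set and commit size, where the commercial suites won, would be read from the operating system's per-process counters rather than from inside the loader.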

  20. A PC-based high-quality and interactive virtual endoscopy navigating system using 3D texture based volume rendering.

    PubMed

    Hwang, Jin-Woo; Lee, Jong-Min; Kim, In-Young; Song, In-Ho; Lee, Yong-Hee; Kim, SunI

    2003-05-01

    As an alternative to optical endoscopy, virtual endoscopy depends crucially on visual quality and interactivity. One solution is to use the 3D texture-based volume rendering method, which offers high rendering speed without reducing visual quality. However, it has been difficult to apply this method to virtual endoscopy. First, 3D texture mapping has required a high-end graphics workstation. Second, texture memory limits reduce the frame rate. Third, the lack of shading reduces visual quality significantly. As 3D texture mapping has recently become available on personal computers, we developed an interactive navigation system using 3D texture mapping on a personal computer. We divided the volume data into small cubes and tested whether each cube contained meaningful data. Only the cubes that passed the test were loaded into texture memory and rendered. With the amount of data to be rendered minimized, rendering speed increased remarkably. We also improved visual quality by implementing full Phong shading based on the iso-surface shading method, without sacrificing interactivity. With the developed navigation system, 256 x 256 x 256 brain MRA data was interactively explored with good image quality. PMID:12725966
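
The empty-brick test that keeps texture memory small can be sketched as follows (the brick size, threshold, and list-of-bricks return are illustrative choices, not the paper's exact parameters):

```python
import numpy as np

def nonempty_bricks(volume, brick=8, threshold=0.0):
    """Divide the volume into small cubes and keep only cubes containing
    data above threshold: the texture-memory-saving test described above."""
    kept = []
    nz, ny, nx = volume.shape
    for z in range(0, nz, brick):
        for y in range(0, ny, brick):
            for x in range(0, nx, brick):
                cube = volume[z:z + brick, y:y + brick, x:x + brick]
                if (cube > threshold).any():   # the "meaningful data" test
                    kept.append(((z, y, x), cube))
    return kept

vol = np.zeros((32, 32, 32))
vol[2:6, 2:6, 2:6] = 1.0          # data confined to one corner
bricks = nonempty_bricks(vol)     # only 1 of the 64 bricks survives
```

In medical volumes, where most voxels are background, culling empty bricks like this is what makes the dataset fit into limited texture memory.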

  1. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionality. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located in the right image through the use of block matching. The difference in position between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread, which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also contains a CPU display thread, which uses OpenGL rendering (quad buffers). This thread also gathers user input for digital zoom and pan and sends it to the processing thread.
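
The disparity-range step can be sketched from a histogram of block-matching disparities (using the histogram's occupied extent and a midpoint shift rule as illustrative simplifications of the player's convergence logic):

```python
import numpy as np

def convergence_shift(disparities, bins=32):
    """Pick a horizontal shift that centres the scene's disparity range
    at convergence, following the histogram-extrema idea above."""
    hist, edges = np.histogram(disparities, bins=bins)
    occupied = np.nonzero(hist)[0]
    d_min = edges[occupied[0]]        # nearest scene content
    d_max = edges[occupied[-1] + 1]   # farthest scene content
    return -(d_min + d_max) / 2.0     # split the shift between both images

# Keypoint disparities from a toy scene spanning [-10, 30] pixels.
d = np.concatenate([np.full(50, -10.0), np.full(50, 30.0), np.full(100, 10.0)])
shift = convergence_shift(d)
```

Centring the disparity range keeps the scene straddling the screen plane, which is the eye-strain rationale the abstract describes; a production player would also weight the choice by display geometry.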

  2. 3D in the Fast Lane: Render as You Go with the Latest OpenGL Boards.

    ERIC Educational Resources Information Center

    Sauer, Jeff; Murphy, Sam

    1997-01-01

    NT OpenGL hardware allows modelers and animators to work at relatively inexpensive NT workstations in their own offices or homes, rather than sharing space and workstation time in expensive studios. Rates seven OpenGL boards and two QuickDraw 3D accelerator boards for Mac users on overall value, wireframe and texture rendering, 2D acceleration, and…

  3. On-the-sphere block-based 3D terrain rendering using a wavelet-encoded terrain database for SVS

    NASA Astrophysics Data System (ADS)

    Baxes, Gregory A.; Linger, Tim

    2006-05-01

    Successful integration, and ultimately adoption, of 3D Synthetic Vision (SV) systems in the flight environment as a cockpit aid to pilot situational awareness (SA) depends highly on overcoming two primary engineering obstacles: 1) storing on-board terrain databases with sufficient accuracy, resolution and coverage area; and 2) achieving real-time, deterministic, accurate and artifact-free 3D terrain rendering. These two requirements pull against each other, posing a significant challenge for deployable SV systems that has not been adequately addressed by the proliferation of visual-simulation terrain-rendering approaches. Safety-critical SV systems for flight-deployed use, for ground control of flight systems such as UAVs, and for accurate mission rehearsal require a solution to these challenges. This paper describes the TerraMetrics TerraBlocks method of storing wavelet-encoded terrain datasets and a tightly-coupled 3D terrain-block rendering approach. Large-area terrain datasets are encoded using a wavelet transform, producing a hierarchical quadtree, powers-of-2 structure of the original terrain data at numerous levels of detail (LODs). The entire original raster terrain mesh (e.g., DTED) is transformed using either lossless or lossy wavelet transformation and is maintained in an equirectangular projection. The lossless form retains the full integrity of the original terrain mesh data in the flight dataset. A side-effect benefit of terrain data compression is also achieved. The TerraBlocks run-time 3D terrain-block renderer accesses arbitrary, uniform-sized blocks of terrain data at varying LODs, depending on scene composition, from the wavelet-transformed terrain dataset. Terrain data blocks retain a spatially-filtered depiction of the original mesh data at the retrieved LOD. Terrain data blocks are processed as discrete objects and placed into spherical world space, relative to the viewpoint. Rendering determinacy is achieved through terrain-block LOD management and spherical
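
The hierarchical powers-of-2 encoding can be illustrated with one level of a simple average-plus-details transform on a power-of-2 grid (a stand-in for the wavelet family TerraBlocks actually uses; the lossless-reconstruction property still holds):

```python
import numpy as np

def haar_lod(terrain):
    """One decomposition level on a power-of-2 terrain mesh: the average
    band is the next-coarser LOD; the detail bands allow exact recovery."""
    a = terrain[0::2, 0::2]
    b = terrain[0::2, 1::2]
    c = terrain[1::2, 0::2]
    d = terrain[1::2, 1::2]
    avg = (a + b + c + d) / 4.0               # coarser LOD (half resolution)
    return avg, (a - avg, b - avg, c - avg)   # d is implied: d = 4*avg - a - b - c

def haar_reconstruct(avg, details):
    """Invert haar_lod, recovering the finer LOD from the coarser one."""
    da, db, dc = details
    a, b, c = da + avg, db + avg, dc + avg
    d = 4.0 * avg - a - b - c
    out = np.empty((avg.shape[0] * 2, avg.shape[1] * 2))
    out[0::2, 0::2], out[0::2, 1::2] = a, b
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out

dem = np.random.rand(8, 8) * 100.0    # toy power-of-2 elevation grid
coarse, details = haar_lod(dem)       # 4x4 LOD plus detail bands
```

Applying the decomposition recursively yields the quadtree of LODs the renderer draws from: coarse blocks far from the viewpoint, detail-restored blocks nearby.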

  4. Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings

    NASA Astrophysics Data System (ADS)

    Mankovich, Nicholas J.; White, Stuart C.

    1992-05-01

    This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study demonstrating the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method for producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla in which 3-D reconstructions made with different bone thresholds (windows) are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the reconstructed lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques for the 3-D reconstruction, as well as cautionary language that should accompany 3-D images.
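
The study's "maximum theoretical fidelity" rule, the midpoint between average cortical bone and average soft tissue, and its effect on partial-volume voxels can be sketched as follows (the HU values and toy slice are illustrative, not the case data):

```python
import numpy as np

def midpoint_threshold(cortical_bone_hu, soft_tissue_hu):
    """The maximum-theoretical-fidelity rule from the study: the midpoint
    between average cortical bone and average soft tissue intensity."""
    return (cortical_bone_hu + soft_tissue_hu) / 2.0

def bone_mask(ct, threshold):
    """Threshold segmentation feeding the shaded-surface reconstruction."""
    return ct >= threshold

# Toy slice: a bone plate whose edges are partial-volume voxels.
ct = np.full((10, 10), 40.0)          # soft tissue, about 40 HU
ct[3:7, 3:7] = 1200.0                 # cortical bone
ct[2, 3:7] = ct[7, 3:7] = 620.0       # partial-volume rim at the edges
mid = midpoint_threshold(1200.0, 40.0)       # 620 HU
high = bone_mask(ct, 900.0).sum()     # an aggressive threshold loses the rim
fair = bone_mask(ct, mid).sum()       # the midpoint threshold keeps it
```

The rim voxels gained or lost by the threshold choice are exactly where the millimetre-scale (up to 44 percent) measurement errors of the case study come from.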

  5. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    A new software (FROMS3D) is presented to visualize fracture network system in 3-D. The software consists of several modules that play roles in management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D and calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results have suggested that the developed software is effective in visualizing 3-D fracture network system, and can provide useful information to tackle the engineering geological problems related to strength, deformability and hydraulic behaviors of the fractured rock masses.

  6. Evaluation of 3-D graphics software: A case study

    NASA Technical Reports Server (NTRS)

    Lores, M. E.; Chasen, S. H.; Garner, J. M.

    1984-01-01

    An efficient 3-D geometry graphics software package which is suitable for advanced design studies was developed. The advanced design system is called GRADE--Graphics for Advanced Design. Efficiency and ease of use are gained by sacrificing flexibility in surface representation. The immediate options were either to continue development of GRADE or to acquire a commercially available system which would replace or complement GRADE. Test cases which would reveal the ability of each system to satisfy the requirements were developed. A scoring method which adequately captured the relative capabilities of the three systems was presented. While more complex multi-attribute decision methods could be used, the selected method provides all the needed information without being so complex that it is difficult to understand. If the value factors are modestly perturbed, System Z is a clear winner based on its overall capabilities. System Z is superior in two vital areas: surfacing and ease of interface with application programs.

  7. Hyoid bone development: An assessment of optimal CT scanner parameters and 3D volume rendering techniques

    PubMed Central

    Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.

    2015-01-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349

  8. Comparison between 3D volumetric rendering and multiplanar slices on the reliability of linear measurements on CBCT images: an in vitro study

    PubMed Central

    FERNANDES, Thais Maria Freire; ADAMCZYK, Julie; POLETI, Marcelo Lupion; HENRIQUES, José Fernando Castanha; FRIEDLAND, Bernard; GARIB, Daniela Gamba

    2015-01-01

    Objective The purpose of this study was to determine the accuracy and reliability of two methods of measuring linear distances (multiplanar 2D and tridimensional reconstruction 3D) obtained from cone-beam computed tomography (CBCT) with different voxel sizes. Material and Methods Ten dry human mandibles were scanned at voxel sizes of 0.2 and 0.4 mm. Craniometric anatomical landmarks were identified twice by two independent operators on the multiplanar reconstructed and on volume rendering images that were generated by the software Dolphin®. Subsequently, physical measurements were performed using a digital caliper. Analysis of variance (ANOVA), intraclass correlation coefficient (ICC) and Bland-Altman analysis were used for evaluating accuracy and reliability (p<0.05). Results Excellent intraobserver reliability and good to high interobserver reliability values were found for linear measurements from CBCT 3D and multiplanar images. Measurements performed on multiplanar reconstructed images were more accurate than measurements in volume rendering when compared with the gold standard. No statistically significant difference was found between voxel protocols, independently of the measurement method. Conclusions Linear measurements on multiplanar images of 0.2 and 0.4 mm voxel size are reliable and accurate when compared with direct caliper measurements. Caution should be taken with volume rendering measurements, because the measurements were reliable, but not accurate for all variables. An increased voxel resolution did not result in greater accuracy of mandible measurements and would potentially increase patient radiation exposure. PMID:25004053
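    The Bland-Altman agreement analysis used in studies like this one can be sketched generically in Python: the bias is the mean difference between the two methods and the 95% limits of agreement are bias ± 1.96 standard deviations. This is a standard-method illustration, not the study's pipeline; the sample measurements are invented:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired measurements (mm): digital caliper vs. CBCT image.
caliper = [42.1, 38.5, 40.0, 41.2]
cbct    = [42.3, 38.2, 40.4, 41.0]
bias, lo, hi = bland_altman(caliper, cbct)
```

    If most paired differences fall inside [lo, hi] and the bias is near zero, the imaging method is considered to agree with the gold standard.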

  9. A combined fuzzy-neural network model for non-linear prediction of 3-D rendering workload in grid computing.

    PubMed

    Doulamis, Nikolaos D; Doulamis, Anastasios D; Panagakis, Athanasios; Dolkas, Konstantinos; Varvarigou, Theodora A; Varvarigos, Emmanuel

    2004-04-01

    Implementation of a commercial application on a grid infrastructure introduces new challenges in managing the quality-of-service (QoS) requirements, most of which stem from the fact that negotiation on QoS between the user and the service provider should strictly be satisfied. An interesting commercial application with a wide impact on a variety of fields, which can benefit from computational grid technologies, is three-dimensional (3-D) rendering. In order to implement, however, 3-D rendering on a grid infrastructure, we should develop appropriate scheduling and resource allocation mechanisms so that the negotiated QoS requirements are met. Efficient scheduling schemes require modeling and prediction of rendering workload. In this paper, workload prediction is addressed based on a combined fuzzy classification and neural network model. Initially, appropriate descriptors are extracted to represent the synthetic world. The descriptors are obtained by parsing RIB-formatted files, which provide a general structure for describing computer-generated images. Fuzzy classification is used for organizing the rendering descriptors so that a reliable representation is accomplished, which increases the prediction accuracy. A neural network performs workload prediction by modeling the nonlinear input-output relationship between rendering descriptors and the respective computational complexity. To increase prediction accuracy, a constructive algorithm is adopted in this paper to train the neural network so that network weights and size are simultaneously estimated. Then, a grid scheduling scheme is proposed to estimate the queuing order in which the tasks should be executed and the most appropriate processor assignment so that the demanded QoS requirements are satisfied as much as possible. A fair scheduling policy is considered as the most appropriate. Experimental results on a real grid infrastructure are presented to illustrate the efficiency of the proposed workload prediction and scheduling algorithm.

  10. 3-D surface rendering of myocardial SPECT images segmented by level set technique.

    PubMed

    Lee, Hwun-Jae; Lee, Sangbock

    2012-06-01

    SPECT (single photon emission computed tomography) myocardial imaging is a diagnostic technique in which a gamma-emitting radiopharmaceutical is injected intravenously and, after the drug has dispersed evenly in the heart, the region of interest is imaged and examined for disease-induced changes using a computer. Myocardial perfusion imaging, which contains functional information, is useful for the non-invasive diagnosis of myocardial disease, but noise caused by physical factors and low resolution makes the images difficult to read. To help in reading myocardial images, this study proposed a method that segments myocardial images and reconstructs the segmented region into a 3D image. To resolve the difficulty in reading, we segmented the left ventricle, the region of interest, using a level set method and modeled the segmented region as a 3D image. PMID:20839037

  11. ESPript/ENDscript: extracting and rendering sequence and 3D information from atomic structures of proteins

    PubMed Central

    Gouet, Patrice; Robert, Xavier; Courcelle, Emmanuel

    2003-01-01

    The fortran program ESPript was created in 1993, to display on a PostScript figure multiple sequence alignments adorned with secondary structure elements. A web server was made available in 1999 and ESPript has been linked to three major web tools: ProDom which identifies protein domains, PredictProtein which predicts secondary structure elements and NPS@ which runs sequence alignment programs. A web server named ENDscript was created in 2002 to facilitate the generation of ESPript figures containing a large amount of information. ENDscript uses programs such as BLAST, Clustal and PHYLODENDRON to work on protein sequences and such as DSSP, CNS and MOLSCRIPT to work on protein coordinates. It enables the creation, from a single Protein Data Bank identifier, of a multiple sequence alignment figure adorned with secondary structure elements of each sequence of known 3D structure. Similar 3D structures are superimposed in turn with the program PROFIT and a final figure is drawn with BOBSCRIPT, which shows sequence and structure conservation along the Cα trace of the query. ESPript and ENDscript are available at http://genopole.toulouse.inra.fr/ESPript. PMID:12824317

  12. Exploring Brushlet Based 3D Textures in Transfer Function Specification for Direct Volume Rendering of Abdominal Organs.

    PubMed

    Alper Selver, M

    2015-02-01

    Intuitive and differentiating domains for transfer function (TF) specification for direct volume rendering are an important research area for producing informative and useful 3D images. One of the emerging branches of this research is texture-based transfer functions. Although several studies in two-, three-, and four-dimensional image processing show the importance of using texture information, these studies generally focus on segmentation. However, TFs can also be built effectively using appropriate texture information. To accomplish this, methods should be developed to capture the wide variety of shapes, orientations, and textures of biological tissues and organs. In this study, the volumetric data (i.e., the domain of a TF) are enhanced using brushlet expansion, which represents both low- and high-frequency textured structures at different quadrants in the transform domain. Three methods (i.e., expert-based manual, atlas- and machine-learning-based automatic) are proposed for selection of the quadrants. Non-linear manipulation of the complex brushlet coefficients is also used prior to the tiling of selected quadrants and reconstruction of the volume. Applications to abdominal data sets acquired with CT, MR, and PET show that the proposed volume enhancement effectively improves the quality of 3D rendering using well-known TF specification techniques. PMID:26357028

  13. Accuracy and reliability of measurements obtained from computed tomography 3D volume rendered images.

    PubMed

    Stull, Kyra E; Tise, Meredith L; Ali, Zabiullah; Fowler, David R

    2014-05-01

    Forensic pathologists commonly use computed tomography (CT) images to assist in determining the cause and manner of death as well as for mass disaster operations. Even though the design of the CT machine does not inherently produce distortion, most techniques within anthropology rely on metric variables; thus, concern exists regarding the accuracy of CT images reflecting an object's true dimensions. Numerous researchers have attempted to validate the use of CT images; however, the comparisons have only been conducted on limited elements and/or comparisons were between measurements taken from a dry element and measurements taken from the 3D-CT image of the same dry element. A full-body CT scan was performed prior to autopsy at the Office of the Chief Medical Examiner for the State of Maryland. Following autopsy, the remains were processed to remove all soft tissues and the skeletal elements were subjected to an additional CT scan. Percent differences and Bland-Altman plots were used to assess the accuracy between osteometric variables obtained from the dry skeletal elements and from CT images with and without soft tissues. An additional seven crania were scanned, measured by three observers, and the reliability was evaluated by technical error of measurement (TEM) and relative technical error of measurement (%TEM). Average percent differences between the measurements obtained from the three data sources ranged from 1.4% to 2.9%. Bland-Altman plots illustrated that the two sets of measurements were generally within 2 mm for each comparison between data sources. Intra-observer TEM and %TEM for three observers and all craniometric variables ranged between 0.46 mm and 0.77 mm and 0.56% and 1.06%, respectively. The three-way inter-observer TEM and %TEM for craniometric variables were 2.6 mm and 2.26%, respectively. Variables that yielded high error rates were orbital height, orbital breadth, inter-orbital breadth and parietal chord. Overall, minimal differences were found among the
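    The intra-observer TEM and %TEM statistics reported above can be computed with the standard two-measurement formulas: TEM = sqrt(Σd² / 2N) over the paired differences, and %TEM expresses that as a percentage of the grand mean. A minimal Python sketch (standard formulas, not the study's code; the measurement values are invented):

```python
import math

def tem(obs1, obs2):
    """Technical error of measurement for two repeated measurement sets."""
    n = len(obs1)
    ss = sum((a - b) ** 2 for a, b in zip(obs1, obs2))
    return math.sqrt(ss / (2 * n))

def relative_tem(obs1, obs2):
    """%TEM: the TEM expressed as a percentage of the grand mean."""
    grand_mean = (sum(obs1) + sum(obs2)) / (2 * len(obs1))
    return 100.0 * tem(obs1, obs2) / grand_mean

# Hypothetical repeated craniometric measurements (mm) by one observer.
o1 = [100.0, 98.0, 102.0]
o2 = [101.0, 97.0, 103.0]
```

    Because %TEM is scale-free, it allows error comparison across variables of very different sizes, such as orbital breadth versus parietal chord.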

  14. Image-Based Rendering of LOD1 3D City Models for traffic-augmented Immersive Street-view Navigation

    NASA Astrophysics Data System (ADS)

    Brédif, M.

    2013-10-01

    It may be argued that urban areas can now be modeled with sufficient detail for realistic fly-throughs over cities at a reasonable price point. Modeling cities at the street level for immersive street-view navigation is, however, still a very expensive (or even impossible) operation if one tries to match the level of detail acquired by street-view mobile mapping imagery. This paper proposes to leverage the richness of these street-view images with the common availability of nation-wide LOD1 3D city models, using an image-based rendering technique: projective multi-texturing. Such a coarse 3D city model may be used as a lightweight scene proxy of approximate coarse geometry. The images neighboring the interpolated viewpoint are projected onto this scene proxy using their estimated poses and calibrations and blended together according to their relative distance. This enables an immersive navigation within the image dataset that is perfectly equal to, and thus as rich as, the original images when viewed from their viewpoint locations, and which degrades gracefully in between viewpoint locations. Beyond proving the applicability of this preprocessing-free computer graphics technique to mobile mapping images and LOD1 3D city models, our contributions are three-fold. Firstly, image distortion is corrected online on the GPU, preventing an extra image resampling step. Secondly, externally-computed binary masks may be used to discard pixels corresponding to moving objects. Thirdly, we propose a shadowmap-inspired technique that prevents, at marginal cost, the projective texturing of surfaces beyond the first, as seen from the projected image's viewpoint location. Finally, an augmented visualization application is introduced to showcase the proposed immersive navigation: images are unpopulated of vehicles using externally-computed binary masks and repopulated using a 3D visualization of a 2D traffic simulation.
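    The distance-based blending of projected images described above can be illustrated with a simple inverse-distance weighting scheme, so that the nearest acquisition viewpoint dominates. This is a simplified stand-in for the paper's view-dependent blending, not its actual shader; the function name and weighting formula are assumptions:

```python
def blend_by_distance(colors, distances, eps=1e-6):
    """Blend candidate texture colors with inverse-distance weights:
    images captured closer to the interpolated viewpoint contribute more."""
    weights = [1.0 / (d + eps) for d in distances]   # eps avoids division by zero
    total = sum(weights)
    n = len(colors[0])
    return tuple(sum(w * c[i] for w, c in zip(weights, colors)) / total
                 for i in range(n))

# Two candidate RGB samples from neighboring viewpoints, equidistant from
# the interpolated viewpoint, so they contribute equally:
mixed = blend_by_distance([(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)], [1.0, 1.0])
```

    At a captured viewpoint one distance goes to zero, its weight dominates, and the rendering reduces to the original image, which is the graceful-degradation property the abstract describes.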

  15. Fast software-based volume rendering using multimedia instructions on PC platforms and its application to virtual endoscopy

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro

    2003-05-01

    This paper describes a software-based fast volume rendering (VolR) method on a PC platform using multimedia instructions, such as SIMD instructions, which are currently available in PC CPUs. This method achieves fast rendering speed through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot products of normal vectors and light source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes of volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxel size by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware at thirteen frames per second. Semi-translucent display is also possible.
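    Process (a) above, interpolating voxel values at ray sample points, is the classic trilinear interpolation, and it benefits from exactly the data-parallel batching that SIMD instructions provide. A NumPy sketch of trilinear interpolation vectorized over many sample points at once (a rough analogue of SIMD batching, not the paper's implementation; the function name is hypothetical):

```python
import numpy as np

def trilinear(vol, pts):
    """Trilinear interpolation of a 3D volume at floating-point sample points
    (an N x 3 array of z, y, x coordinates), vectorized over all points."""
    p0 = np.floor(pts).astype(int)   # lower corner of each enclosing cell
    f = pts - p0                     # fractional position inside the cell
    z0, y0, x0 = p0.T
    z1, y1, x1 = z0 + 1, y0 + 1, x0 + 1
    fz, fy, fx = f.T
    c = 0.0
    # Accumulate the 8 corner contributions, each weighted by its volume fraction.
    for zi, wz in ((z0, 1 - fz), (z1, fz)):
        for yi, wy in ((y0, 1 - fy), (y1, fy)):
            for xi, wx in ((x0, 1 - fx), (x1, fx)):
                c = c + vol[zi, yi, xi] * wz * wy * wx
    return c

vol = np.arange(8, dtype=float).reshape(2, 2, 2)   # toy 2x2x2 volume
pts = np.array([[0.5, 0.5, 0.5]])                  # cell center
print(trilinear(vol, pts))  # [3.5]
```

    Evaluating thousands of sample points per call amortizes instruction overhead across the whole batch, which is the same principle the paper exploits with SIMD intrinsics.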

  16. Sphere-Enhanced Microwave Ablation (sMWA) Versus Bland Microwave Ablation (bMWA): Technical Parameters, Specific CT 3D Rendering and Histopathology

    SciTech Connect

    Gockner, T. L.; Zelzer, S.; Mokry, T.; Gnutzmann, D.; Bellemann, N.; Mogler, C.; Beierfuß, A.; Köllensperger, E.; Germann, G.; Radeleff, B. A.; Stampfl, U.; Kauczor, H. U.; Pereira, P. L.; Sommer, C. M.

    2015-04-15

    Purpose This study was designed to compare technical parameters during ablation as well as CT 3D rendering and histopathology of the ablation zone between sphere-enhanced microwave ablation (sMWA) and bland microwave ablation (bMWA). Methods In six sheep livers, 18 microwave ablations were performed with identical system presets (power output: 80 W, ablation time: 120 s). In three sheep, transarterial embolisation (TAE) was performed immediately before microwave ablation using spheres (diameter: 40 ± 10 μm) (sMWA). In the other three sheep, microwave ablation was performed without sphere embolisation (bMWA). Contrast-enhanced CT, sacrifice, and liver harvest followed immediately after microwave ablation. Study goals included technical parameters during ablation (resulting power output, ablation time), geometry of the ablation zone applying specific CT 3D rendering with a software prototype (short axis of the ablation zone, volume of the largest aligned ablation sphere within the ablation zone), and histopathology (hematoxylin-eosin, Masson Goldner and TUNEL). Results Resulting power output/ablation times were 78.7 ± 1.0 W/120 ± 0.0 s for bMWA and 78.4 ± 1.0 W/120 ± 0.0 s for sMWA (n.s., respectively). Short axis/volume were 23.7 ± 3.7 mm/7.0 ± 2.4 cm³ for bMWA and 29.1 ± 3.4 mm/11.5 ± 3.9 cm³ for sMWA (P < 0.01, respectively). Histopathology confirmed the signs of coagulation necrosis as well as early and irreversible cell death for bMWA and sMWA. For sMWA, spheres were detected within, at the rim of, and outside of the ablation zone without conspicuous features. Conclusions Specific CT 3D rendering identifies a larger ablation zone for sMWA compared with bMWA. The histopathological signs and the detectable amount of cell death are comparable for both groups. When comparing sMWA with bMWA, TAE has no effect on the technical parameters during ablation.

  17. 3D reconstruction software comparison for short sequences

    NASA Astrophysics Data System (ADS)

    Strupczewski, Adam; Czupryński, Błażej

    2014-11-01

    Large scale multiview reconstruction is recently a very popular area of research. There are many open source tools that can be downloaded and run on a personal computer. However, there are few, if any, comparisons between all the available software in terms of accuracy on small datasets that a single user can create. The typical datasets for testing of the software are archeological sites or cities, comprising thousands of images. This paper presents a comparison of currently available open source multiview reconstruction software for small datasets. It also compares the open source solutions with a simple structure from motion pipeline developed by the authors from scratch with the use of OpenCV and Eigen libraries.

  18. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
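    One simplified way to see how opacity can be controlled without depth sorting: if each of the m opaque points covering a pixel is retained independently with probability p, the expected fraction of stochastic renderings in which the pixel is covered is 1 - (1 - p)^m, which can serve as the pixel's opacity. Inverting this gives a retention probability for a target opacity. This is an illustrative probabilistic model, not the authors' exact algorithm; the function name is hypothetical:

```python
def keep_probability(target_alpha, points_per_pixel):
    """Per-point retention probability so that, averaged over many random
    point subsets, a pixel covered by `points_per_pixel` opaque points
    reaches the target opacity:
        alpha = 1 - (1 - p)^m   =>   p = 1 - (1 - alpha)^(1/m)
    """
    return 1.0 - (1.0 - target_alpha) ** (1.0 / points_per_pixel)
```

    Because coverage is decided point-by-point and the results are averaged, no per-ray sorting of primitives is needed, which is the key to the interactive frame rates reported above.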

  19. New software for visualizing 3D geological data in coal mines

    NASA Astrophysics Data System (ADS)

    Lee, Sungjae; Choi, Yosoon

    2015-04-01

    This study developed new software to visualize 3D geological data in coal mines. The Visualization Tool Kit (VTK) library and Visual Basic.NET 2010 were used to implement the software. The software consists of several modules providing the following functionalities: (1) importing and editing borehole data; (2) modelling of coal seams in 3D; (3) modelling of coal properties using the 3D ordinary Kriging method; (4) calculating economic values of 3D blocks; (5) pit boundary optimization for identifying economical coal reserves based on the Lerchs-Grossmann algorithm; and (6) visualizing 3D geological, geometrical and economical data. Application of the software to a small-scale open-pit coal mine in Indonesia revealed that it can provide useful information supporting the planning and design of open-pit coal mines.
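    The ordinary Kriging interpolation mentioned in module (3) estimates a coal property at an unsampled location as a weighted sum of nearby samples, with weights obtained by solving a linear system built from a variogram model. A minimal Python sketch, not the software's actual implementation; the exponential variogram and its parameters are assumptions:

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, rng=10.0):
    """Ordinary-kriging estimate at x0 from scattered samples (xy, z),
    using an exponential variogram gamma(h) = sill * (1 - exp(-3h/rng))."""
    def gamma(h):
        return sill * (1.0 - np.exp(-3.0 * h / rng))

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)  # pairwise distances
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                       # unbiasedness (Lagrange) row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)[:n]       # last entry is the Lagrange multiplier
    return float(w @ z)

# Two hypothetical borehole samples of a coal property:
xy = np.array([[0.0, 0.0], [1.0, 0.0]])
z = np.array([1.0, 2.0])
```

    Ordinary Kriging is an exact interpolator: estimating at a sample location returns the sample value, and the weights always sum to one, so block estimates stay in a physically plausible range.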

  20. JULIDE: a software tool for 3D reconstruction and statistical analysis of autoradiographic mouse brain sections.

    PubMed

    Ribes, Delphine; Parafita, Julia; Charrier, Rémi; Magara, Fulvio; Magistretti, Pierre J; Thiran, Jean-Philippe

    2010-01-01

    In this article we introduce JULIDE, a software toolkit developed to perform the 3D reconstruction, intensity normalization, volume standardization by 3D image registration and voxel-wise statistical analysis of autoradiographs of mouse brain sections. This software tool has been developed in the open-source ITK software framework and is freely available under a GPL license. The article presents the complete image processing chain from raw data acquisition to 3D statistical group analysis. Results of the group comparison in the context of a study on spatial learning are shown as an illustration of the data that can be obtained with this tool. PMID:21124830

  1. On Fundamental Evaluation Using Uav Imagery and 3d Modeling Software

    NASA Astrophysics Data System (ADS)

    Nakano, K.; Suzuki, H.; Tamino, T.; Chikatsu, H.

    2016-06-01

    Unmanned aerial vehicles (UAVs), which have been widely used in recent years, can acquire high-resolution images with resolutions in millimeters; such images cannot be acquired with manned aircraft. Moreover, it has become possible to obtain a surface reconstruction of a realistic 3D model using high-overlap images and 3D modeling software such as Context capture, Pix4Dmapper, and Photoscan, based on computer vision techniques such as structure from motion and multi-view stereo. 3D modeling software has many applications. However, most of them seem not to have applied appropriate accuracy control in accordance with the knowledge of photogrammetry and/or computer vision. Therefore, we performed flight tests in a test field using a UAV equipped with a gimbal stabilizer and a consumer-grade digital camera. Our UAV is a hexacopter that can fly along waypoints for autonomous flight and can record flight logs. We acquired images from different altitudes such as 10 m, 20 m, and 30 m. We obtained 3D reconstruction results of orthoimages, point clouds, and textured TIN models for accuracy evaluation in several cases with different image scale conditions using 3D modeling software. Moreover, the accuracy aspect was evaluated for different units of input image: course unit and flight unit. This paper describes the fundamental accuracy evaluation for 3D modeling using UAV imagery and 3D modeling software from the viewpoint of close-range photogrammetry.

  2. Some Methods of Applied Numerical Analysis to 3d Facial Reconstruction Software

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Ianeş, Emilia; Roşu, Doina

    2010-09-01

    This paper deals with the collective work performed by medical doctors from the University of Medicine and Pharmacy Timisoara and engineers from the Politechnical Institute Timisoara in the effort to create the first Romanian 3D reconstruction software based on CT or MRI scans and to test the created software in clinical practice.

  3. UNIQUIMER 3D, a software system for structural DNA nanotechnology design, analysis and evaluation

    PubMed Central

    Zhu, Jinhao; Wei, Bryan; Yuan, Yuan; Mi, Yongli

    2009-01-01

    A user-friendly software system, UNIQUIMER 3D, was developed to design DNA structures for nanotechnology applications. It consists of 3D visualization, internal energy minimization, sequence generation and construction of motif array simulation (2D tiles and 3D lattices) functionalities. The system can be used to check structural deformation and design errors under scaled-up conditions. UNIQUIMER 3D has been tested on the design of both existing motifs (Holliday junction, 4 × 4 tile, double crossover, DNA tetrahedron, DNA cube, etc.) and non-existing motifs (soccer ball). The results demonstrated UNIQUIMER 3D's capability in designing large complex structures. We also designed a de novo sequence generation algorithm. UNIQUIMER 3D was developed for the Windows environment and is provided free of charge to nonprofit research institutions. PMID:19228709

  4. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399
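    A digitally rendered radiograph, as described above, is a simulated X-ray: attenuation is integrated along each ray through the CT volume and mapped through a Beer-Lambert exponential. A minimal NumPy sketch using parallel projection along one volume axis (an illustration only; the paper's renderers use perspective ray casting and GPU splatting, and the function name and attenuation values here are hypothetical):

```python
import numpy as np

def simple_drr(ct, axis=0):
    """Parallel-projection DRR: sum attenuation coefficients along one axis
    and apply a simplified Beer-Lambert model, I = exp(-integral)."""
    line_integral = ct.sum(axis=axis)
    return np.exp(-line_integral)

# Toy CT volume of attenuation coefficients; one dense voxel attenuates one ray.
ct = np.zeros((4, 2, 2))
ct[1, 0, 0] = 0.5
drr = simple_drr(ct)   # 2x2 image; the ray through the dense voxel is darker
```

    The iterative 2D/3D registration loop repeatedly regenerates such a DRR under candidate poses and compares it to the planar reference X-ray, which is why rendering speed dominates the overall registration time.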

  5. Comparative study of software techniques for 3D mapping of perforators in deep inferior epigastric artery perforator flap planning

    PubMed Central

    Hunter-Smith, David J.; Rozen, Warren Matthew

    2016-01-01

    Background Computed tomographic (CT) angiography (CTA) is widely considered the gold standard imaging modality for preoperative planning of autologous breast reconstruction with the deep inferior epigastric artery (DIEA) perforator (DIEP) flap. Improved anatomical understanding from CTA has translated to enhanced clinical outcomes. To achieve this, the use of appropriate CT hardware and software is vital. Various CT scanners and contrast materials have been demonstrated to consistently produce adequate scan data. However, the availability of affordable and easily accessible imaging software capable of generating 3D volume-rendered perforator images of clinically useful quality has been lacking. Osirix (Pixmeo, Geneva, Switzerland) is a free, readily available medical image processing software package that shows promise. We have previously demonstrated in a case report the usefulness of Osirix in localizing perforators and their course. Methods In the current case series of 50 consecutive CTA scans, we compare the accuracy of Osirix to a commonly used proprietary 3D imaging software package, Siemens Syngo InSpace 4D (Siemens, Erlangen, Germany), in identifying perforator number and location. Moreover, we compared both programs to intraoperative findings. Results We report a high rate of concordance between Osirix and Siemens Syngo InSpace 4D (99.6%). Both programs correlated closely with operative findings (92.2%). Most of the discrepancies were found in the lateral row perforators (90%). Conclusions In the current study, we report accuracy of Osirix comparable to that of Siemens Syngo InSpace 4D, a proprietary software package, in mapping perforators. However, Osirix provides the added advantages of being free, easy to use, and portable, and of potentially producing 3D reconstructed images of superior quality. PMID:27047778

6. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3D Models

    NASA Astrophysics Data System (ADS)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

Cultural heritage managers in general and information users in particular are not usually accustomed to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments to real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information they share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of real documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of new user-friendly software to manage virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  7. Internet-based hardware/software co-design framework for embedded 3D graphics applications

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Tsai; Wang, Chun-Hao; Huang, Ing-Jer; Wong, Weng-Fai

    2011-12-01

    Advances in technology are making it possible to run three-dimensional (3D) graphics applications on embedded and handheld devices. In this article, we propose a hardware/software co-design environment for 3D graphics application development that includes the 3D graphics software, OpenGL ES application programming interface (API), device driver, and 3D graphics hardware simulators. We developed a 3D graphics system-on-a-chip (SoC) accelerator using transaction-level modeling (TLM). This gives software designers early access to the hardware even before it is ready. On the other hand, hardware designers also stand to gain from the more complex test benches made available in the software for verification. A unique aspect of our framework is that it allows hardware and software designers from geographically dispersed areas to cooperate and work on the same framework. Designs can be entered and executed from anywhere in the world without full access to the entire framework, which may include proprietary components. This results in controlled and secure transparency and reproducibility, granting leveled access to users of various roles.
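Transaction-level modeling replaces pin-level hardware interaction with function-call transactions, which is what lets driver software run against the hardware model before the silicon or RTL exists. The sketch below illustrates the idea only; real TLM is typically written in SystemC/C++, and the memory-mapped register layout here is invented for the example, not taken from the article's 3D graphics SoC.

```python
class TLMDevice:
    """Minimal, untimed transaction-level model of a memory-mapped 3D
    accelerator: software exercises the model through read/write
    transactions instead of pin-level signals."""
    REG_CTRL, REG_STATUS, REG_VERTEX_COUNT = 0x00, 0x04, 0x08

    def __init__(self):
        self.regs = {self.REG_CTRL: 0, self.REG_STATUS: 0,
                     self.REG_VERTEX_COUNT: 0}

    def write(self, addr, value):
        self.regs[addr] = value
        if addr == self.REG_CTRL and value & 1:   # "start" bit set
            self.regs[self.REG_STATUS] = 1        # done immediately (untimed)

    def read(self, addr):
        return self.regs[addr]

# Driver-side code talks to the model exactly as it would the real device
dev = TLMDevice()
dev.write(TLMDevice.REG_VERTEX_COUNT, 3)
dev.write(TLMDevice.REG_CTRL, 1)
print(dev.read(TLMDevice.REG_STATUS))  # 1 -- "render complete"
```

The same driver code can later be pointed at a cycle-accurate model or the real hardware, which is the co-design benefit the abstract describes.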

  8. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about the true 3D structure of specimens. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry. In optical close-range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special requirements of SEM. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching. PMID:26073969

  9. Development of a new software for analyzing 3-D fracture network

    NASA Astrophysics Data System (ADS)

    Um, Jeong-Gi; Noh, Young-Hwan; Choi, Yosoon

    2014-05-01

New software is presented to analyze fracture networks in 3-D. Recently, we completed the software package based on information given in EGU2013. The software consists of several modules that handle management of borehole data, stochastic modelling of fracture networks, construction of the analysis domain, visualization of fracture geometry in 3-D, calculation of equivalent pipes and production of cross-section diagrams. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open-source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. A case study was performed to analyze the 3-D fracture network system of the Upper Devonian Grosmont Formation in Alberta, Canada. The results suggest that the developed software is effective in modelling and visualizing 3-D fracture network systems, and can provide useful information to tackle geomechanical problems related to the strength, deformability and hydraulic behaviour of fractured rock masses. This presentation describes the concept and details of the development and implementation of the software.

  10. Motion-Capture-Enabled Software for Gestural Control of 3D Models

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony

    2012-01-01

Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to intuitively control the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.
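The core idea, mapping tracked body positions onto object transforms, can be sketched as follows. This is an illustrative reconstruction, not the NASA software itself; the two-hand convention used here (hand midpoint drives translation, hand separation drives scale, the hand-to-hand axis drives yaw) is an assumption chosen for the example.

```python
import math

def two_hand_transform(left, right, base_distance):
    """Map two tracked 3D hand positions to an object transform.

    Translation: midpoint of the hands.
    Scale: current hand separation relative to a reference distance.
    Yaw: angle of the left->right vector in the horizontal (x-z) plane.
    """
    mid = tuple((l + r) / 2.0 for l, r in zip(left, right))
    dx, dy, dz = (r - l for l, r in zip(left, right))
    separation = math.sqrt(dx * dx + dy * dy + dz * dz)
    scale = separation / base_distance
    yaw = math.degrees(math.atan2(dz, dx))  # rotation about the vertical axis
    return {"translate": mid, "scale": scale, "yaw_deg": yaw}

# Hands 1 m apart along x, centered at (0, 1, 0), reference distance 0.5 m
t = two_hand_transform((-0.5, 1.0, 0.0), (0.5, 1.0, 0.0), 0.5)
print(t)  # scale 2.0, translation (0.0, 1.0, 0.0), yaw 0.0
```

Because every tracked marker contributes continuous degrees of freedom, several such mappings can run simultaneously, which is what makes the gestural control of position, scale, and orientation at once possible.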

  11. Morphological and Volumetric Assessment of Cerebral Ventricular System with 3D Slicer Software.

    PubMed

    Gonzalo Domínguez, Miguel; Hernández, Cristina; Ruisoto, Pablo; Juanes, Juan A; Prats, Alberto; Hernández, Tomás

    2016-06-01

We present a technological process based on the 3D Slicer software for the three-dimensional study of the brain's ventricular system for teaching purposes. It assesses the morphology of this complex brain structure, as a whole and in any spatial position, and allows comparison with pathological studies, where its anatomy visibly changes. 3D Slicer was also used to obtain volumetric measurements in order to provide a more comprehensive and detailed representation of the ventricular system. We assess the potential of this software for processing high-resolution magnetic resonance images and generating three-dimensional reconstructions of the ventricular system. PMID:27147517

  12. A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-05-01

We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable, rendering a 204 GB data cube at an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
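Volume bricking of the kind described, decomposing a cube into sub-volumes that are distributed across GPUs for ray-casting and later composited, can be sketched as below. This is a simplified illustration, not the authors' framework; the round-robin assignment is an assumption (real systems also weight by brick occupancy and view-dependent cost).

```python
def brick_volume(shape, brick):
    """Split a volume of `shape` voxels into axis-aligned bricks of at
    most `brick` voxels per axis; returns (origin, size) tuples."""
    bricks = []
    for z in range(0, shape[2], brick[2]):
        for y in range(0, shape[1], brick[1]):
            for x in range(0, shape[0], brick[0]):
                size = (min(brick[0], shape[0] - x),
                        min(brick[1], shape[1] - y),
                        min(brick[2], shape[2] - z))
                bricks.append(((x, y, z), size))
    return bricks

def assign(bricks, n_gpus):
    """Round-robin assignment of bricks to GPUs; each GPU ray-casts its
    bricks and the partial images are composited in depth order."""
    return {g: bricks[g::n_gpus] for g in range(n_gpus)}

bricks = brick_volume((512, 512, 256), (128, 128, 128))
plan = assign(bricks, 8)
print(len(bricks))   # 4 * 4 * 2 = 32 bricks
print(len(plan[0]))  # 4 bricks per GPU
```

The bricks jointly tile the volume exactly, so the composited partial renderings reproduce a single-node ray-cast of the full cube.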

  13. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research

    PubMed Central

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2013-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists’ demands for qualitative analysis of confocal microscopy data. PMID:23584131
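As a concrete example of a 2D image space method applied after volume rendering, the sketch below applies the Reinhard global tone-mapping operator L/(1+L) to a small rendered luminance image. The choice of operator is illustrative; the abstract mentions 2D tone mapping generally and does not specify this formula.

```python
def reinhard_tonemap(luminance_rows):
    """Apply the Reinhard global operator L/(1+L) to a 2D image of
    HDR luminance values, compressing them into [0, 1)."""
    return [[L / (1.0 + L) for L in row] for row in luminance_rows]

# A 2x2 "rendered" luminance image with a large dynamic range
hdr = [[0.0, 1.0], [3.0, 9.0]]
ldr = reinhard_tonemap(hdr)
print(ldr)  # [[0.0, 0.5], [0.75, 0.9]]
```

Because the operator works on the projected 2D image, its cost is independent of the 3D volume size, which is the efficiency argument the abstract makes for image-space methods.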

  14. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using Visualizer, software specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  15. 3D Game-Based Learning System for Improving Learning Achievement in Software Engineering Curriculum

    ERIC Educational Resources Information Center

Su, Chung-Ho; Cheng, Ching-Hsue

    2013-01-01

The advancement of game-based learning has encouraged many related studies, such that students could better learn the curriculum through 3-dimensional virtual reality. To enhance software engineering learning, this paper develops a 3D game-based learning system to assist teaching and assess the students' motivation, satisfaction and learning…

  16. Scipion: A software framework toward integration, reproducibility and validation in 3D electron microscopy.

    PubMed

    de la Rosa-Trevín, J M; Quintana, A; Del Cano, L; Zaldívar, A; Foche, I; Gutiérrez, J; Gómez-Blanco, J; Burguet-Castell, J; Cuenca-Alba, J; Abrishami, V; Vargas, J; Otón, J; Sharov, G; Vilas, J L; Navas, J; Conesa, P; Kazemi, M; Marabini, R; Sorzano, C O S; Carazo, J M

    2016-07-01

    In the past few years, 3D electron microscopy (3DEM) has undergone a revolution in instrumentation and methodology. One of the central players in this wide-reaching change is the continuous development of image processing software. Here we present Scipion, a software framework for integrating several 3DEM software packages through a workflow-based approach. Scipion allows the execution of reusable, standardized, traceable and reproducible image-processing protocols. These protocols incorporate tools from different programs while providing full interoperability among them. Scipion is an open-source project that can be downloaded from http://scipion.cnb.csic.es. PMID:27108186
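The workflow-based protocol idea, steps with declared inputs and outputs that can be chained, reused, and traced, can be sketched as below. This is a toy illustration; the class and method names are invented for the example and do not reflect Scipion's actual API.

```python
class Protocol:
    """Minimal sketch of a workflow step: declaring inputs explicitly is
    what makes a run reusable and traceable (hypothetical API)."""
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, inputs
        self.output = None

    def run(self):
        # Consume the recorded outputs of upstream protocols
        args = [p.output for p in self.inputs]
        self.output = self.func(*args)
        return self.output

# A toy two-step pipeline: an "import" step feeding a "sort" step
imp = Protocol("import", lambda: [4.0, 1.0, 3.0])
srt = Protocol("sort", lambda xs: sorted(xs), inputs=(imp,))
for step in (imp, srt):  # execute in dependency order
    step.run()
print(srt.output)  # [1.0, 3.0, 4.0]
```

In a real framework each step would wrap a tool from a different 3DEM package, with the declared input/output types providing the interoperability the abstract describes.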

  17. The capture and dissemination of integrated 3D geospatial knowledge at the British Geological Survey using GSI3D software and methodology

    NASA Astrophysics Data System (ADS)

    Kessler, Holger; Mathers, Steve; Sobisch, Hans-Georg

    2009-06-01

The Geological Surveying and Investigation in 3 Dimensions (GSI3D) software tool and methodology has been developed over the last 15 years, since 2001 in cooperation with the British Geological Survey (BGS). To date, over a hundred BGS geologists have learned to use the software, which is now routinely deployed in building systematic and commercial 3D geological models. The success of the GSI3D methodology and software is based on its intuitive design and the fact that it utilises exactly the same data and methods, albeit in digital form, that geologists have been using for two centuries to make geological maps and cross-sections. The geologist constructs models based on a career of observation of geological phenomena, thereby incorporating tacit knowledge into the model. This knowledge capture is a key element of the GSI3D approach. In BGS, GSI3D is part of a much wider set of systems and work processes that together make up the cyberinfrastructure of a modern geological survey. The GSI3D software is not yet designed to cope with bedrock structures in which individual stratigraphic surfaces are repeated or inverted, but the software is currently being extended by BGS to encompass these more complex geological scenarios. A further challenge for BGS is to enable its 3D geological models to become part of the semantic Web using GML application schemas such as GeoSciML. The biggest benefits of widely available systematic geological models will be an enhanced public understanding of the sub-surface in 3D, and the teaching of geoscience students.

  18. 3D-Assisted Quantitative Assessment of Orbital Volume Using an Open-Source Software Platform in a Taiwanese Population

    PubMed Central

    Shyu, Victor Bong-Hang; Hsu, Chung-En; Chen, Chih-hao; Chen, Chien-Tzung

    2015-01-01

Orbital volume evaluation is an important part of pre-operative assessments in orbital trauma and congenital deformity patients. The availability of the affordable, open-source software OsiriX as a tool for preoperative planning increased the popularity of radiological assessments by the surgeon. A volume calculation method based on 3D volume rendering-assisted region-of-interest computation was used to determine the normal orbital volume in Taiwanese patients after reorientation to the Frankfurt plane. Method one utilized 3D points for intuitive orbital rim outlining. The mean normal orbital volume for left and right orbits was 24.3±1.51 ml and 24.7±1.17 ml in male and 21.0±1.21 ml and 21.1±1.30 ml in female subjects. Another method (method two) based on the bilateral orbital lateral rim was also used to calculate orbital volume and compared with method one. The mean normal orbital volume for left and right orbits was 19.0±1.68 ml and 19.1±1.45 ml in male and 16.0±1.01 ml and 16.1±0.92 ml in female subjects. The inter-rater reliability and intra-rater measurement accuracy between users for both methods were found to be acceptable for orbital volume calculations. 3D-assisted quantification of orbital volume is a feasible technique for orbital volume assessment. The normal orbital volume can be used as controls in cases of unilateral orbital reconstruction with a mean size discrepancy of less than 3.1±2.03% in females and 2.7±1.32% in males. The OsiriX software can be used reliably by the individual surgeon as a comprehensive preoperative planning and imaging tool for orbital volume measurement and computed tomography reorientation. PMID:25774683
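At its core, a region-of-interest volume computation reduces to counting segmented voxels and multiplying by the physical voxel volume. The sketch below is illustrative only (OsiriX performs this internally on the outlined ROI); the mask and voxel spacing are made-up example data.

```python
def roi_volume_ml(mask, voxel_mm):
    """Volume of a segmented region: number of voxels inside the ROI
    times the voxel volume, converted from mm^3 to millilitres
    (1 ml = 1000 mm^3)."""
    voxel_mm3 = voxel_mm[0] * voxel_mm[1] * voxel_mm[2]
    n = sum(v for slice_ in mask for row in slice_ for v in row)
    return n * voxel_mm3 / 1000.0

# Three slices of a 2x2 binary mask: 10 voxels inside, 1x1x1 mm voxels
mask = [[[1, 1], [1, 1]], [[1, 1], [1, 1]], [[1, 1], [0, 0]]]
print(roi_volume_ml(mask, (1.0, 1.0, 1.0)))  # 0.01 ml
```

CT voxels are typically anisotropic (slice thickness differs from in-plane spacing), which is why the per-axis spacing is passed explicitly.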

  19. Software Development: 3D Animations and Creating User Interfaces for Realistic Simulations

    NASA Technical Reports Server (NTRS)

    Gordillo, Orlando Enrique

    2015-01-01

    My fall 2015 semester was spent at the Lyndon B. Johnson Space Center working in the Integrated Graphics, Operations, and Analysis Laboratory (IGOAL). My first project was to create a video animation that could tell the story of OMICS. OMICS is a term being used in the field of biomedical science to describe the collective technologies that study biological systems, such as what makes up a cell and how it functions with other systems. In the IGOAL I used a large 23 inch Wacom monitor to draw storyboards, graphics, and line art animations. I used Blender as the 3D environment to sculpt, shape, cut or modify the several scenes and models for the video. A challenge creating this video was to take a term used in biomedical science and describe it in such a way that an 8th grade student can understand. I used a line art style because it would visually set the tone for what we thought was an educational style. In order to get a handle on the perspective and overall feel for the animation without overloading my workspace, I split up the 2 minute animation into several scenes. I used Blender's python scripting capabilities which allowed for the addition of plugins to add or modify tools. The scripts can also directly interact with the objects to create naturalistic patterns or movements. After collecting the rendered scenes, I used Blender's built-in video editing workspace to output the animation. My second project was to write software that emulates a physical system's interface. The interface was to simulate a boat, ROV, and winch system. Simulations are a time and cost effective way to test complicated data and provide training for operators without having to use expensive hardware. We created the virtual controls with 3-D Blender models and 2-D graphics, and then add functionality in C# using the Unity game engine. The Unity engine provides several essential behaviors of a simulator, such as the start and update functions. 
A framework for Unity, which was developed in

  20. An open-source deconvolution software package for 3-D quantitative fluorescence microscopy imaging

    PubMed Central

    SUN, Y.; DAVIS, P.; KOSMACEK, E. A.; IANZINI, F.; MACKEY, M. A.

    2010-01-01

Summary Deconvolution techniques have been widely used for restoring the 3-D quantitative information of an unknown specimen observed using a wide-field fluorescence microscope. Deconv, an open-source deconvolution software package, was developed for 3-D quantitative fluorescence microscopy imaging and was released under the GNU Public License. Deconv provides numerical routines for simulation of a 3-D point spread function and deconvolution routines implementing three constrained iterative deconvolution algorithms: one based on a Poisson noise model and two others based on a Gaussian noise model. These algorithms are presented and evaluated using synthetic images and experimentally obtained microscope images, and the use of the library is explained. Deconv allows users to assess the utility of these deconvolution algorithms and to determine which are suited for a particular imaging application. The design of Deconv makes it easy for deconvolution capabilities to be incorporated into existing imaging applications. PMID:19941558
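The classic constrained iterative scheme for a Poisson noise model is the Richardson-Lucy algorithm; the sketch below implements a 1-D version purely as an illustration of that family of methods (Deconv's actual routines are 3-D and are not reproduced here).

```python
def convolve(signal, psf):
    """Circular 1-D convolution with a short, odd-length kernel."""
    n, m = len(signal), len(psf)
    half = m // 2
    return [sum(signal[(i - k + half) % n] * psf[k] for k in range(m))
            for i in range(n)]

def richardson_lucy(observed, psf, iters=50):
    """Richardson-Lucy iteration, the classic constrained deconvolution
    for a Poisson noise model: the estimate stays non-negative and is
    refined by a multiplicative correction factor."""
    psf_flipped = psf[::-1]
    est = [1.0] * len(observed)  # flat, positive initial estimate
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        corr = convolve(ratio, psf_flipped)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.2, 0.6, 0.2]              # normalized blur kernel
truth = [0, 0, 0, 8, 0, 0, 0, 0]   # a point source
observed = convolve(truth, psf)    # noiseless blurred observation
restored = richardson_lucy(observed, psf, iters=200)
print(round(restored[3], 1))       # peak recovered, ~8.0
```

With normalized kernels the multiplicative update conserves total flux, one of the constraints that makes the method well suited to quantitative imaging.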

  1. 3D modeling of high-Tc superconductors by finite element software

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Coombs, T. A.

    2012-01-01

A three-dimensional (3D) numerical model is proposed to solve the electromagnetic problems involving transport current and background field of a high-Tc superconducting (HTS) system. The model is characterized by the E-J power law and H-formulation, and is successfully implemented using finite element software. We first discuss the model in detail, including the mesh methods, boundary conditions and computing time. To validate the 3D model, we calculate the AC loss and trapped field solution for a bulk material and compare the results with previously verified 2D solutions and an analytical solution. We then apply our model to typical problems such as superconducting bulk arrays and twisted conductors, which cannot be tackled by 2D models. The new 3D model could be a powerful tool for researchers and engineers to investigate problems with a greater level of complexity.
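For reference, the two relations named in the abstract take the following standard forms (written here in their usual textbook notation; the paper's particular discretization and parameter values are not reproduced):

```latex
\begin{align}
\mathbf{E} &= E_c \left( \frac{|\mathbf{J}|}{J_c} \right)^{n}
             \frac{\mathbf{J}}{|\mathbf{J}|}
             & \text{(E--J power law)} \\
\mathbf{J} &= \nabla \times \mathbf{H} \\
\nabla \times \mathbf{E} &= -\mu_0 \mu_r \frac{\partial \mathbf{H}}{\partial t}
             & \text{(H-formulation of Faraday's law)}
\end{align}
```

Here $E_c$ is the electric field criterion defining the critical state, $J_c$ the critical current density, and $n$ the power-law index; large $n$ recovers the critical state model, while $n \to 1$ gives ohmic behavior.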

  2. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach. Consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open-source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimized method to generate three-dimensional models. Much research has been conducted to identify suitable software and algorithms to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to examine and introduce an appropriate combination of sensor and software to provide a complete model with the highest accuracy.
To do this, different software, used in previous studies, were compared and

  3. Development of 3-D fracture network visualization software based on graphical user interface

    NASA Astrophysics Data System (ADS)

Noh, Young-Hwan; Um, Jeong-Gi; Choi, Yosoon; Park, Myong-Ho; Choi, Jaeyoung

    2013-04-01

A sound understanding of the structural characteristics of fractured rock masses is important in designing and maintaining earth structures because their strength, deformability, and hydraulic behavior depend mainly on the characteristics of discontinuity network structures. Despite considerable progress in understanding the structural characteristics of rock masses, the complexity of discontinuity patterns has prevented satisfactory analysis based on a 3-D rock mass visualization model. This research presents the results of studies performed to develop rock mass visualization in 3-D to analyze the mechanical and hydraulic behavior of fractured rock masses. General and particular solutions of non-linear equations of disk-shaped fractures have been derived to calculate lines of intersection and equivalent pipes. Also, the program modules DISK3D, FNTWK3D, BOUNDARY and BDM (borehole data management) have been developed to visualize the fracture network and corresponding equivalent pipes for a DFN-based fluid flow model. The developed software for the 3-D fractured rock mass visualization model, based on MS Visual Studio, can be used to characterize rock mass geometry and network systems effectively. The results obtained in this study will be refined and then combined for use as a tool for assessing geomechanical problems related to the strength, deformability and hydraulic behavior of fractured rock masses. Acknowledgements. This work was supported by the 2011 Energy Efficiency and Resources Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant.
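The first step in computing lines of intersection between disk-shaped fractures is intersecting their supporting planes. The sketch below shows only that geometric kernel (it is illustrative and not taken from the developed modules); clipping the line against both disk radii to obtain the equivalent-pipe segment would follow.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plane_intersection(n1, d1, n2, d2):
    """Line of intersection of planes n1.x = d1 and n2.x = d2: returns a
    point on the line and its direction, or None if the planes are
    parallel. Uses p = ((d1*n2 - d2*n1) x dir) / |dir|^2, dir = n1 x n2."""
    direction = cross(n1, n2)
    denom = dot(direction, direction)
    if denom < 1e-12:
        return None
    w = tuple(d1 * b - d2 * a for a, b in zip(n1, n2))
    point = tuple(c / denom for c in cross(w, direction))
    return point, direction

# Two orthogonal fracture planes through the origin (z = 0 and y = 0)
# intersect along the x axis.
line = plane_intersection((0.0, 0.0, 1.0), 0.0, (0.0, 1.0, 0.0), 0.0)
print(line)  # point (0.0, 0.0, 0.0), direction (-1.0, 0.0, 0.0)
```

For disk-shaped fractures, each disk center's distance to this line determines whether and where the disks actually overlap, which is what yields the equivalent pipe used in the DFN flow model.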

  4. Age Estimation in Living Adults using 3D Volume Rendered CT Images of the Sternal Plastron and Lower Chest.

    PubMed

    Oldrini, Guillaume; Harter, Valentin; Witte, Yannick; Martrille, Laurent; Blum, Alain

    2016-01-01

Age estimation is commonly of interest in a judicial context. In adults, it is less well documented than in children. The aim of this study was to evaluate age estimation in adults using CT images of the sternal plastron with the volume rendering technique (VRT). The evaluation criteria are derived from known methods used for age estimation and are applicable in living or dead subjects. The VRT images of 456 patients were analyzed. Two radiologists performed age estimation independently from an anterior view of the plastron. Interobserver agreement and correlation coefficients between each reader's classification and real age were calculated. The interobserver agreement was 0.86, and the correlation coefficients between the readers' classifications and real age classes were 0.60 and 0.65. Spearman correlation coefficients were, respectively, 0.89, 0.67, and 0.71. Analysis of the plastron using VRT allows rapid age estimation in vivo, with results similar to those of methods such as Iscan and Suchey-Brooks, and to radiographs used to estimate age at death. PMID:27092960
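The Spearman coefficients reported above are Pearson correlations computed on rank vectors. A self-contained sketch (the data here are invented for illustration, not the study's measurements):

```python
import math

def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(vx * vy)

reader = [35, 42, 58, 61, 70]    # hypothetical estimated ages
true_age = [33, 45, 55, 66, 68]  # hypothetical real ages
print(spearman(reader, true_age))  # 1.0 -- identical ordering
```

Because it depends only on ordering, Spearman's rho is the natural measure when readers sort subjects into ordinal age classes rather than predicting exact ages.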

  5. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turnaround times often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
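Distributing size-adaptive blocks across workers for load balancing can be sketched with a greedy longest-processing-time scheduler: sort blocks by cost, then repeatedly hand the largest remaining block to the least-loaded worker. This is an illustration of the scheduling idea only, not the platform's actual implementation; the block costs are invented.

```python
import heapq

def schedule_blocks(block_costs, n_workers):
    """Greedy longest-processing-time scheduling: assign each block
    (largest first) to the currently least-loaded worker."""
    heap = [(0.0, w) for w in range(n_workers)]  # (load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for block, cost in sorted(block_costs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)
        assignment[w].append(block)
        heapq.heappush(heap, (load + cost, w))
    return assignment

# Size-adaptive blocks: cost proportional to voxel count (arbitrary units)
costs = {"b0": 8.0, "b1": 7.0, "b2": 6.0, "b3": 5.0, "b4": 4.0, "b5": 2.0}
plan = schedule_blocks(costs, 2)
loads = {w: sum(costs[b] for b in bs) for w, bs in plan.items()}
print(loads)  # {0: 17.0, 1: 15.0}
```

LPT is a simple heuristic with a well-known worst-case bound (within 4/3 of optimal makespan), which is usually adequate for balancing block-level image-processing work.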

  6. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies. PMID:20022133

  7. OS3D/GIMRT software for modeling multicomponent-multidimensional reactive transport

    SciTech Connect

    CI Steefel; SB Yabusaki

    2000-05-17

OS3D/GIMRT is a numerical software package for simulating multicomponent reactive transport in porous media. The package consists of two principal components: (1) the code OS3D (Operator Splitting 3-Dimensional Reactive Transport) which simulates reactive transport by either splitting the reaction and transport steps in time, i.e., the classic time or operator splitting approach, or by iterating sequentially between reactions and transport, and (2) the code GIMRT (Global Implicit Multicomponent Reactive Transport) which treats up to two-dimensional reactive transport with a one-step or global implicit approach. Although the two codes do not yet have totally identical capabilities, they can be run from the same input file, allowing comparisons to be made between the two approaches in many cases. The advantages and disadvantages of the two approaches are discussed more fully below, but in general OS3D is designed for simulation of transient concentration fronts, particularly under high Peclet number transport conditions, because of its use of a total variation diminishing or TVD transport algorithm. GIMRT is suited for simulating water-rock alteration over long periods of time where the aqueous concentration field is at or close to a quasi-stationary state and the numerical transport errors are less important. Where water-rock interaction occurs over geological periods of time, GIMRT may be preferable to OS3D because of its ability to take larger time steps.
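The operator-splitting approach alternates a transport step and a reaction step within each time step. A minimal 1-D sketch with first-order upwind advection and first-order decay (purely illustrative of the splitting idea; OS3D's actual TVD transport and multicomponent chemistry are far more involved):

```python
import math

def advect(c, courant):
    """First-order upwind advection for one step (periodic boundary);
    `courant` = v*dt/dx must lie in [0, 1] for stability."""
    n = len(c)
    return [c[i] - courant * (c[i] - c[i - 1]) for i in range(n)]

def react(c, k, dt):
    """First-order decay, integrated exactly over dt."""
    decay = math.exp(-k * dt)
    return [ci * decay for ci in c]

def step_operator_split(c, courant, k, dt):
    """Classic operator splitting: transport first, then reaction."""
    return react(advect(c, courant), k, dt)

c = [1.0, 0.0, 0.0, 0.0]
c = step_operator_split(c, courant=1.0, k=math.log(2.0), dt=1.0)
print(c)  # pulse advected one cell and roughly halved by decay
```

The splitting error per step is O(dt), which is why the abstract notes the approach suits transient fronts with small steps, while a global implicit scheme like GIMRT can take larger steps for quasi-stationary problems.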

  8. Significant acceleration of 2D-3D registration-based fusion of ultrasound and x-ray images by mesh-based DRR rendering

    NASA Astrophysics Data System (ADS)

    Kaiser, Markus; John, Matthias; Borsdorf, Anja; Mountney, Peter; Ionasec, Razvan; Nöttling, Alois; Kiefer, Philipp; Seeburger, Jörg; Neumuth, Thomas

    2013-03-01

    For transcatheter-based minimally invasive procedures in structural heart disease, ultrasound and X-ray are the two enabling imaging modalities. A live fusion of these two real-time modalities can potentially improve the workflow and catheter navigation by combining the excellent instrument imaging of X-ray with the high-quality soft tissue imaging of ultrasound. A recently published approach to fusing X-ray fluoroscopy with trans-esophageal echo (TEE) registers the ultrasound probe to X-ray images by a 2D-3D registration method, which inherently provides a registration of ultrasound images to X-ray images. In this paper, we significantly accelerate the 2D-3D registration method in this context. The main novelty is to generate the projection images (DRRs) of the 3D object not via volume ray-casting but via fast rendering of triangular meshes. This is possible because, in the setting of TEE/X-ray fusion, the 3D geometry of the ultrasound probe is known in advance and its main components can be described by triangular meshes. We show that the new approach can achieve a speedup factor of up to 65 and does not affect registration accuracy when used in conjunction with the gradient correlation similarity measure. The improvement is independent of the underlying registration optimizer. Based on these results, TEE/X-ray fusion could be performed at a higher frame rate and with a shorter time lag, towards real-time registration performance. The approach could potentially accelerate other applications of 2D-3D registration, e.g. the registration of implant models with X-ray images.

  9. Fast voxel-based 2D/3D registration algorithm using a volume rendering method based on the shear-warp factorization

    NASA Astrophysics Data System (ADS)

    Weese, Juergen; Goecke, Roland; Penney, Graeme P.; Desmedt, Paul; Buzug, Thorsten M.; Schumann, Heidrun

    1999-05-01

    2D/3D registration makes it possible to use pre-operative CT scans for navigation purposes during X-ray fluoroscopy guided interventions. We present a fast voxel-based method for this registration task, which uses a recently introduced similarity measure (pattern intensity). This measure is especially suitable for 2D/3D registration because it is robust with respect to structures such as a stent that are visible in the X-ray fluoroscopy image but not in the CT scan. The method uses only part of the CT scan for the generation of digitally reconstructed radiographs (DRRs) to accelerate their computation. Nevertheless, computation time is crucial for intra-operative application and a further speed-up is required, because numerous DRRs must be computed. For that reason, the suitability of different volume rendering methods for 2D/3D registration has been investigated. A method based on the shear-warp factorization of the viewing transformation turned out to be especially suitable and forms the basis of the registration algorithm. The algorithm has been applied to images of a spine phantom and to clinical images. For comparison, registration results were also calculated using ray-casting. The shear-warp factorization based rendering method accelerates registration by a factor of up to seven compared to ray-casting, without degrading registration accuracy. Using a vertebra as the feature for registration, computation time is in the range of 3-4 s (Sun UltraSparc, 300 MHz), which is acceptable for intra-operative application.
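
    Pattern intensity rewards a locally flat difference image, which is why a structure present in only one image (such as a stent) perturbs it only in a small neighbourhood. A brute-force numpy sketch of the measure as commonly defined in the 2D/3D registration literature; the sigma and radius values are arbitrary and this is not the paper's code.

```python
import numpy as np

def pattern_intensity(fixed, drr, sigma=10.0, radius=3):
    """Sum over pixels and neighbourhood offsets of
    sigma^2 / (sigma^2 + (diff(x) - diff(y))^2) on the difference image.
    Higher values mean a flatter (better matched) difference image."""
    diff = fixed.astype(float) - drr.astype(float)
    s2 = sigma * sigma
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            if dy * dy + dx * dx > radius * radius:
                continue
            # np.roll wraps at the borders; acceptable for a sketch
            shifted = np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
            total += (s2 / (s2 + (diff - shifted) ** 2)).sum()
    return total

# Usage: a perfectly matched pair scores higher than a structured mismatch
base = np.zeros((16, 16))
rng = np.random.default_rng(0)
score_flat = pattern_intensity(base, base)
score_structured = pattern_intensity(base, 20.0 * rng.standard_normal((16, 16)))
```

For radius 3 there are 28 neighbourhood offsets, so the perfect-match score is simply 28 per pixel.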

  10. Remote measurement methods for 3-D modeling purposes using BAE Systems' Software

    NASA Astrophysics Data System (ADS)

    Walker, Stewart; Pietrzak, Arleta

    2015-06-01

    Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.

  11. Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software

    NASA Astrophysics Data System (ADS)

    Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander

    2016-06-01

    Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of the cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding ease of workflow, visual appeal, and similarity and quality of the point cloud. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programs that give little insight into their processing. Unsatisfactory results can only be addressed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.

  12. Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing

    NASA Astrophysics Data System (ADS)

    Kim, Dongkyun; Gil, Joon-Min

    2015-03-01

    The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has focused a great deal of attention on efficient remote control of manufacturing processes. SDN is a renowned paradigm for network softwarization that facilitates remote manufacturing with high network performance, since SDN is designed to control network paths and traffic flows, guaranteeing improved quality of service by obtaining network requests from end applications on demand through a separate SDN controller or control plane. However, current SDN approaches generally focus on the control and automation of the networks, and lack the management-plane development needed for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantages of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault tolerance of SDN, in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for SDN failover processes, based on four principal SDN breakdown scenarios derived from failures of the controller, SDN nodes, and connected links. These failures significantly reduce network reachability to remote devices (e.g., 3D printers, super-high-definition cameras, etc.) and the reliability of the relevant control processes. Our performance consideration and analysis results show that the proposed scheme reduces the operations and management overhead of SDN, which enhances the responsiveness and reliability of SDN for remote 3D printing and control processes.

  13. UCVM: An Open Source Software Package for Querying and Visualizing 3D Velocity Models

    NASA Astrophysics Data System (ADS)

    Gill, D.; Small, P.; Maechling, P. J.; Jordan, T. H.; Shaw, J. H.; Plesch, A.; Chen, P.; Lee, E. J.; Taborda, R.; Olsen, K. B.; Callaghan, S.

    2015-12-01

    Three-dimensional (3D) seismic velocity models provide foundational data for ground motion simulations that calculate the propagation of earthquake waves through the Earth. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) package for both Linux and OS X. This unique framework provides a cohesive way of querying and visualizing 3D models. UCVM v14.3.0 supports many Southern California velocity models, including CVM-S4, CVM-H 11.9.1, and CVM-S4.26. The last model was derived from 26 full-3D tomographic iterations on CVM-S4. Recently, UCVM has been used to deliver a prototype of a new 3D model of central California (CCA), also based on full-3D tomographic inversions. UCVM was used to provide initial plots of this model and will be used to deliver CCA to users when the model is publicly released. Visualizing models is also possible with UCVM. Integrated within the platform are plotting utilities that can generate 2D cross-sections, horizontal slices, and basin depth maps. UCVM can also export models in NetCDF format for easy import into IDV and ParaView. UCVM has also been prototyped to export models that are compatible with IRIS' new Earth Model Collaboration (EMC) visualization utility. This capability allows user-specified horizontal slices and cross-sections to be plotted in the same 3D Earth space. UCVM was designed to help a wide variety of researchers. It is currently being used to generate velocity meshes for many SCEC wave propagation codes, including AWP-ODC-SGT and Hercules. It is also used to provide the initial input to SCEC's CyberShake platform. For those interested in specific data points, the software framework makes it easy to extract P and S wave propagation speeds and other material properties from 3D velocity models by providing a common interface through which researchers can query earth models for a given location and depth. Also included in the latest release is the ability to add small-scale stochastic heterogeneities.
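
    The "common interface" idea is the core of UCVM: every model, regardless of how it stores its data, answers a (longitude, latitude, depth) query with material properties. A toy Python sketch of that pattern; the class and the layer values below are invented for illustration and are not part of the UCVM API.

```python
from dataclasses import dataclass

@dataclass
class Material:
    vp: float       # P-wave speed, m/s
    vs: float       # S-wave speed, m/s
    density: float  # kg/m^3

class Layered1DModel:
    """Toy 1D velocity model: properties depend on depth only.  A real model
    would interpolate a 3D volume, but it would expose the same query()."""
    def __init__(self, layers):
        # layers: list of (top_depth_m, Material), any order
        self.layers = sorted(layers, key=lambda l: l[0])

    def query(self, lon, lat, depth_m):
        mat = self.layers[0][1]
        for top, m in self.layers:
            if depth_m >= top:
                mat = m
        return mat

model = Layered1DModel([
    (0.0,    Material(vp=1700.0, vs=450.0,  density=2000.0)),
    (1000.0, Material(vp=4400.0, vs=2500.0, density=2400.0)),
    (5000.0, Material(vp=6300.0, vs=3600.0, density=2700.0)),
])
shallow = model.query(-118.24, 34.05, 100.0)
deep = model.query(-118.24, 34.05, 8000.0)
```

A mesh generator built on such an interface only ever loops over grid points and calls query(), which is what lets UCVM serve many wave propagation codes from one code path.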

  14. A graphic user interface for efficient 3D photo-reconstruction based on free software

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; James, Michael; Gómez, Jose A.

    2015-04-01

    Recently, different studies have stressed the applicability of 3D photo-reconstruction based on Structure from Motion algorithms in a wide range of geoscience applications. For the purpose of image photo-reconstruction, a number of commercial and freely available software packages have been developed (e.g. Agisoft PhotoScan, VisualSFM). The workflow typically involves different stages, such as image matching, sparse and dense photo-reconstruction, point cloud filtering, and georeferencing. For approaches using open and free software, each of these stages usually requires a different application. In this communication, we present an easy-to-use graphic user interface (GUI) developed in Matlab® code as a tool for efficient 3D photo-reconstruction making use of powerful existing software: VisualSFM (Wu, 2015) for photo-reconstruction and CloudCompare (Girardeau-Montaut, 2015) for point cloud processing. The GUI acts as a manager of configurations and algorithms, taking advantage of the command-line modes of the existing software, which allows an intuitive and automated processing workflow for the geoscience user. The GUI includes several additional features: a) a routine for significantly reducing the duration of the image matching operation, normally the most time-consuming stage; b) graphical outputs for understanding the overall performance of the algorithm (e.g. camera connectivity, point cloud density); c) a number of useful options typically performed before and after the photo-reconstruction stage (e.g. removal of blurry images, image renaming, vegetation filtering); d) a manager of batch processing for the automated reconstruction of different image datasets. In this study we explore the advantages of this new tool by testing its performance using imagery collected in several soil erosion applications. References: Girardeau-Montaut, D. 2015. CloudCompare documentation, accessed at http://cloudcompare.org/. Wu, C. 2015. VisualSFM documentation, accessed at http://ccwu.me/vsfm/doc.html#.
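
    Because the GUI drives VisualSFM and CloudCompare through their command-line modes, its core is essentially a command assembler and sequencer. A hedged Python sketch of that pattern; the executable names and flags shown are illustrative and should be checked against the tools' own documentation rather than taken as the GUI's actual configuration.

```python
import subprocess

def build_pipeline(image_dir, model_out):
    """Assemble the reconstruction + filtering stages as command lines."""
    return [
        # Sparse + dense reconstruction via VisualSFM's command-line mode
        ["VisualSFM", "sfm+pmvs", image_dir, model_out],
        # Headless statistical-outlier filtering via CloudCompare
        ["CloudCompare", "-SILENT", "-O", model_out, "-SOR", "6", "1.0"],
    ]

def run_pipeline(commands, dry_run=False):
    """Run each stage in sequence; with dry_run, just show the commands."""
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

cmds = build_pipeline("photos/", "model.nvm")
run_pipeline(cmds, dry_run=True)  # print the commands instead of executing
```

Batch processing of multiple image datasets then reduces to calling build_pipeline once per dataset, which is the feature listed as d) above.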

  15. UCVM: Open Source Software for Understanding and Delivering 3D Velocity Models

    NASA Astrophysics Data System (ADS)

    Gill, D.; Small, P.; Maechling, P. J.; Jordan, T. H.; Shaw, J. H.; Plesch, A.; Chen, P.; Lee, E. J.; Taborda, R.; Olsen, K. B.; Callaghan, S.

    2014-12-01

    Physics-based ground motion simulations can calculate the propagation of earthquake waves through 3D velocity models of the Earth. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) framework to help researchers build structured or unstructured velocity meshes from 3D velocity models for use in wave propagation simulations. The UCVM software framework makes it easy to extract P and S wave propagation speeds and other material properties from 3D velocity models by providing a common interface through which researchers can query earth models for a given location and depth. Currently, the platform supports multiple California models, including SCEC CVM-S4 and CVM-H 11.9.1, and has been designed to support models from any region on Earth. UCVM is currently being used to generate velocity meshes for many SCEC wave propagation codes, including AWP-ODC-SGT and Hercules. In this presentation, we describe improvements to the UCVM software. The current version, UCVM 14.3.0, released in March 2014, supports the newest Southern California velocity model, CVM-S4.26, which was derived from 26 full-3D tomographic iterations using CVM-S4 as the starting model (Lee et al., this meeting), and the Broadband 1D velocity model used in the CyberShake 14.2 study. We have ported UCVM to multiple Linux distributions and OS X. Also included in this release is the ability to add small-scale stochastic heterogeneities to extracted Cartesian meshes for use in high-frequency ground motion simulations. This tool was built using FFTW, the open-source C FFT library. The stochastic parameters (Hurst exponent, correlation length, and the horizontal/vertical aspect ratio) can be customized by the user. UCVM v14.3.0 also provides visualization scripts for constructing cross-sections, horizontal slices, basin depths, and Vs30 maps. The interface allows researchers to visually review velocity models. Also, UCVM v14.3.0 can extract
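
    The stochastic-heterogeneity step can be sketched with a standard FFT recipe: shape white noise with a von Karman-style spectrum controlled by a Hurst exponent, a correlation length, and a horizontal/vertical aspect ratio. A 2D numpy illustration of that recipe under assumed spectral conventions; UCVM's C/FFTW implementation differs in detail.

```python
import numpy as np

def stochastic_field(n, dx, hurst, corr_len, aspect, seed=0):
    """White noise filtered by a von Karman-style spectrum; returns a
    zero-mean, unit-variance perturbation field on an n x n grid."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, kz = np.meshgrid(k, k, indexing="ij")
    # Anisotropy: stretch the horizontal correlation length by `aspect`
    k2 = (kx * corr_len * aspect) ** 2 + (kz * corr_len) ** 2
    amp = (1.0 + k2) ** (-(hurst + 1.0) / 2.0)   # sqrt of a 2D von Karman PSD
    spectrum = np.fft.fft2(rng.standard_normal((n, n))) * amp
    field = np.real(np.fft.ifft2(spectrum))
    return (field - field.mean()) / field.std()  # normalize for scaling later

field = stochastic_field(n=128, dx=100.0, hurst=0.1, corr_len=500.0, aspect=5.0)
```

The normalized field would then be scaled to a chosen perturbation strength and added to the queried velocities cell by cell.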

  16. Evaluation of an improved algorithm for producing realistic 3D breast software phantoms: Application for mammography

    SciTech Connect

    Bliznakova, K.; Suryanarayanan, S.; Karellas, A.; Pallikarakis, N.

    2010-11-15

    Purpose: This work presents an improved algorithm for the generation of 3D breast software phantoms and its evaluation for mammography. Methods: The improved methodology has evolved from a previously presented 3D noncompressed breast modeling method used for the creation of breast models of different size, shape, and composition. The breast phantom is composed of breast surface, duct system and terminal ductal lobular units, Cooper's ligaments, lymphatic and blood vessel systems, pectoral muscle, skin, 3D mammographic background texture, and breast abnormalities. The key improvement is the development of a new algorithm for 3D mammographic texture generation. Simulated images of the enhanced 3D breast model without lesions were produced by simulating mammographic image acquisition and were evaluated subjectively and quantitatively. For evaluation purposes, a database of regions of interest taken from simulated and real mammograms was created. Four experienced radiologists participated in a visual subjective evaluation trial, judging the quality of mammograms simulated with the new algorithm against those obtained with the old modeling approach. In addition, extensive quantitative evaluation included power spectral analysis and calculation of fractal dimension, skewness, and kurtosis of simulated and real mammograms from the database. Results: The results from the subjective evaluation strongly suggest that the new methodology for mammographic breast texture creates improved breast models compared to the old approach. Parameters calculated on simulated images, such as the β exponent deduced from the power-law spectral analysis and the fractal dimension, are similar to those calculated on real mammograms. The results for the kurtosis and skewness are also in good agreement with those calculated from clinical images. Comparison with similar calculations published in the literature showed good agreement in the majority of cases. 
Conclusions: The
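
    The power-spectral part of the quantitative evaluation fits a power law P(f) ∝ 1/f^β to the radially averaged spectrum of each region of interest. A generic numpy sketch of that estimate (a common approach to this analysis, not the paper's exact pipeline):

```python
import numpy as np

def power_law_beta(image):
    """Estimate beta in P(f) ~ 1/f**beta from the radially averaged
    2D power spectrum, via a log-log least-squares fit."""
    n = image.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    yy, xx = np.indices(spec.shape)
    r = np.hypot(xx - n // 2, yy - n // 2).astype(int)
    radial = (np.bincount(r.ravel(), weights=spec.ravel())
              / np.bincount(r.ravel()))        # radially averaged power
    f = np.arange(1, n // 2)                   # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(f), np.log(radial[f]), 1)
    return -slope

# Usage: synthesize a 1/f texture (power ~ 1/f^2) and recover beta near 2
rng = np.random.default_rng(1)
n = 128
k = np.hypot(*np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n)))
k[0, 0] = 1.0                                  # avoid division by zero at DC
tex = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) / k))
beta = power_law_beta(tex)
```

Comparing the β of simulated and real mammogram patches is one of the quantitative similarity checks the abstract describes.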

  17. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques for visualizing human internal organs (CT, MRI) or their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aiming at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes, combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.

  18. Toward high-speed 3D nonlinear soft tissue deformation simulations using Abaqus software.

    PubMed

    Idkaidek, Ashraf; Jasiuk, Iwona

    2015-12-01

    We aim to achieve a fast and accurate three-dimensional (3D) simulation of a porcine liver deformation under a surgical tool pressure using the commercial finite element software Abaqus. The liver geometry is obtained using magnetic resonance imaging, and a nonlinear constitutive law is employed to capture large deformations of the tissue. Effects of implicit versus explicit analysis schemes, element type, and mesh density on computation time are studied. We find that Abaqus explicit and implicit solvers are capable of simulating nonlinear soft tissue deformations accurately using first-order tetrahedral elements in a relatively short time by optimizing the element size. This study provides new insights and guidance on accurate and relatively fast nonlinear soft tissue simulations. Such simulations can provide force feedback during robotic surgery and allow visualization of tissue deformations for surgery planning and training of surgical residents. PMID:26530842

  19. Use of 3D imaging in CT of the acute trauma patient: impact of a PACS-based software package.

    PubMed

    Soto, Jorge A; Lucey, Brian C; Stuhlfaut, Joshua W; Varghese, Jose C

    2005-04-01

    To evaluate the impact of a picture archiving and communication system (PACS)-based software package on requests for 3D reconstructions of multidetector CT (MDCT) data sets in the emergency radiology of a level 1 trauma center, we reviewed the number and type of physician requests for 3D reconstructions of MDCT data sets for patients admitted after sustaining multiple trauma during a 12-month period (January 2003-December 2003). During the first 5 months of the study, 3D reconstructions were performed on dedicated workstations located separately from the emergency radiology CT interpretation area. During the last 7 months of the study, reconstructions were performed online by the attending radiologist or resident on duty, using a software package incorporated directly into the PACS workstations. The mean monthly number of 3D reconstructions requested during the two time periods was compared using Student's t test. The monthly mean ± SD of 3D reconstructions performed before and after 3D software incorporation into the PACS was 34 ± 7 (95% CI, 10-58) and 132 ± 31 (95% CI, 111-153), respectively. This difference was statistically significant (p < 0.0001). In the multiple trauma patient, implementation of PACS-integrated software increases utilization of 3D reconstructions of MDCT data sets. PMID:16028324
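
    The statistical comparison above is a two-sample t test on monthly counts. A self-contained Welch's t-test sketch; the paper reports only summary statistics, so the monthly values below are illustrative numbers chosen to match the reported means (34 and 132), not the study's data.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (mb - ma) / math.sqrt(va / na + vb / nb)

# Illustrative monthly counts (means match the abstract; spreads do not)
before = [30, 28, 41, 35, 36]                     # 5 months, mean 34
after_pacs = [120, 150, 100, 160, 125, 140, 129]  # 7 months, mean 132
t = welch_t(before, after_pacs)                   # large t -> significant rise
```

With a roughly fourfold jump in means relative to the month-to-month spread, the statistic is far into the significant range, consistent with the reported p < 0.0001.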

  20. IGMAS+ A New 3D Gravity, FTG and Magnetic Modeling Software

    NASA Astrophysics Data System (ADS)

    Goetze, H.; Schmidt, S.; Fichler, C.; Alvers, M. R.

    2007-12-01

    Modern geophysical interpretation requires an interdisciplinary approach, particularly when considering the amount of 'state of the art' information available in comprehensive databases. A combination of different geophysical surveys employing seismics, gravity and geoelectrics, together with geological and petrological studies, can provide new insights into the structures and tectonic evolution of the lithosphere and natural deposits. Interdisciplinary interpretation is essential for any numerical modelling of these structures and the processes acting on them. Three-dimensional (3D) interactive modeling with the IGMAS+ software provides means for integrated processing and interpretation of geoid, gravity and magnetic fields and their gradients (full tensor), yielding improved geological interpretation. IGMAS+ is an acronym standing for "Interactive Geophysical Modelling Application System". It is based on the existing software IGMAS (http://www.gravity.uni-kiel.de/igmas), a tool developed during the past twenty years for potential field modelling. The new IGMAS+, however, combines the advantages of the "old" IGMAS (e.g., its flexible geometry concept and a fast, stable algorithm) with automated interpretation tools and a modern graphical GUI based on leading-edge insights from perceptual research in computer graphics, thus providing optimal man-machine communication. IGMAS+ models are fully three-dimensional, constructed from triangulated polyhedra and/or triangulated grids to which constant density and/or induced and remanent susceptibility are assigned. Interactive modification of model parameters (geometry, density, susceptibility, magnetization), access to the numerical modeling process, and direct visualization of both calculated and measured gravity and magnetic fields enable the interpreter to design the model as realistically as possible. IGMAS+ allows easy integration of constraining data into the interactive modeling process.

  1. IGMAS+ a new 3D Gravity, FTG and Magnetic Modeling Software

    NASA Astrophysics Data System (ADS)

    Götze, Hans-Jürgen; Schmidt, Sabine; Fichler, Christine; Planka, Christian

    2010-05-01

    Modern geophysical interpretation requires an interdisciplinary approach, particularly when considering the amount of 'state of the art' information available in comprehensive databases. A combination of different geophysical surveys employing seismics, gravity and geoelectrics, together with geological and petrological studies, can provide new insights into the structures and tectonic evolution of the lithosphere and natural deposits. Interdisciplinary interpretation is essential for any numerical modelling of these structures and the processes acting on them. Three-dimensional (3D) interactive modeling with the IGMAS+ software provides means for integrated processing and interpretation of geoid, gravity and magnetic fields and their gradients (full tensor), yielding improved geological interpretation. IGMAS+ is an acronym standing for "Interactive Geophysical Modelling Application System". It is based on the existing software IGMAS (http://www.gravity.uni-kiel.de/igmas), a tool developed during the past twenty years for potential field modelling. The new IGMAS+, however, combines the advantages of the "old" IGMAS (e.g., its flexible geometry concept and a fast, stable algorithm) with automated interpretation tools and a modern graphical GUI based on leading-edge insights from perceptual research in computer graphics, thus providing optimal man-machine communication. IGMAS+ models are fully three-dimensional, constructed from triangulated polyhedra and/or triangulated grids to which constant density and/or induced and remanent susceptibility are assigned. Interactive modification of model parameters (geometry, density, susceptibility, magnetization), access to the numerical modeling process, and direct visualization of both calculated and measured gravity and magnetic fields enable the interpreter to design the model as realistically as possible. IGMAS+ allows easy integration of constraining data into the interactive modeling process.

  2. Using Computer-Aided Design Software and 3D Printers to Improve Spatial Visualization

    ERIC Educational Resources Information Center

    Katsio-Loudis, Petros; Jones, Millie

    2015-01-01

    Many articles have been published on the use of 3D printing technology. From prefabricated homes and outdoor structures to human organs, 3D printing technology has found a niche in many fields, but especially education. With the introduction of AutoCAD technical drawing programs and now 3D printing, learners can use 3D printed models to develop…

  3. Designing Spatial Visualisation Tasks for Middle School Students with a 3D Modelling Software: An Instrumental Approach

    ERIC Educational Resources Information Center

    Turgut, Melih; Uygan, Candas

    2015-01-01

    In this work, certain task designs to enhance middle school students' spatial visualisation ability, in the context of an instrumental approach, have been developed. 3D modelling software, SketchUp®, was used. In the design process, software tools were focused on and, thereafter, the aim was to interpret the instrumental genesis and spatial…

  4. Evaluating Dense 3d Reconstruction Software Packages for Oblique Monitoring of Crop Canopy Surface

    NASA Astrophysics Data System (ADS)

    Brocks, S.; Bareth, G.

    2016-06-01

    Crop Surface Models (CSMs) are 2.5D raster surfaces representing absolute plant canopy height. Using multiple CSMs generated from data acquired at multiple time steps enables crop surface monitoring. This makes it possible to monitor crop growth over time and can be used for monitoring in-field crop growth variability, which is useful in the context of high-throughput phenotyping. This study aims to evaluate several software packages for dense 3D reconstruction from multiple overlapping RGB images at field and plot scale. A summer barley field experiment located at Campus Klein-Altendorf of the University of Bonn was observed by acquiring stereo images from an oblique angle using consumer-grade smart cameras. Two such cameras were mounted at an elevation of 10 m and acquired images for a period of two months during the growing period of 2014. The field experiment consisted of nine barley cultivars that were cultivated in multiple repetitions and nitrogen treatments. Manual plant height measurements were carried out at four dates during the observation period. The software packages Agisoft PhotoScan, VisualSfM with CMVS/PMVS2, and SURE are investigated. The point clouds are georeferenced through a set of ground control points. Where adequate results are reached, a statistical analysis is performed.
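
    The CSM concept reduces growth monitoring to raster arithmetic: subtract the canopy-height rasters of two dates and summarize the difference per plot. A toy numpy sketch of that step; the study works with georeferenced point clouds and real plot geometries, and all values below are invented.

```python
import numpy as np

# Canopy height rasters (m) for the same field at two dates (toy values)
csm_may = np.full((4, 6), 0.30)
csm_june = np.full((4, 6), 0.75)
csm_june[:, :3] = 0.60            # pretend the left plot grew less

growth = csm_june - csm_may       # per-cell growth raster between dates
left_plot_growth = growth[:, :3].mean()   # plot-level summary statistics
right_plot_growth = growth[:, 3:].mean()
```

Comparing such plot-level summaries against the manual height measurements is how the reconstruction packages are validated.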

  5. The polyGeVero® software for fast and easy computation of 3D radiotherapy dosimetry data

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr

    2015-01-01

    The polyGeVero® software package was developed for calculations on 3D dosimetry data such as polymer gel dosimetry. It comprises four workspaces designed for: i) calculating calibrations, ii) storing calibrations in a database, iii) calculating dose distribution 3D cubes, and iv) comparing two datasets, e.g. one measured with a 3D dosimeter against one calculated with a treatment planning system. To accomplish these calculations the software is equipped with a number of tools, such as a brachytherapy isotope database, brachytherapy dose versus distance calculation based on the line approximation approach, automatic spatial alignment of two 3D dose cubes for comparison purposes, 3D gamma index, 3D gamma angle, 3D dose difference, Pearson's coefficient, histogram calculations, isodose superimposition for two datasets, and profile calculations in any desired direction. This communication briefly presents the main functions of the software and reports on the speed of the calculations performed by polyGeVero®.
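
    Of the comparison tools listed, the gamma index is the least self-explanatory: it combines dose difference and distance-to-agreement into a single per-point score, with gamma ≤ 1 counting as a pass. A brute-force 2D numpy sketch of a global gamma computation (not polyGeVero's optimized 3D implementation; grid and tolerances are arbitrary):

```python
import numpy as np

def gamma_index(ref, evl, dx, dose_tol, dist_tol):
    """Gamma map of evaluated dose `evl` against reference `ref` (same grid,
    pixel spacing dx): for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over evaluated points."""
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    gamma = np.empty_like(ref, dtype=float)
    for i in range(ny):
        for j in range(nx):
            dist2 = (((yy - i) * dx) ** 2 + ((xx - j) * dx) ** 2) / dist_tol ** 2
            dose2 = (evl - ref[i, j]) ** 2 / dose_tol ** 2
            gamma[i, j] = np.sqrt((dist2 + dose2).min())
    return gamma

ref = np.linspace(0.0, 2.0, 121).reshape(11, 11)  # smooth reference dose (Gy)
evl = ref + 0.01                                  # uniform 0.01 Gy offset
g = gamma_index(ref, evl, dx=1.0, dose_tol=0.06, dist_tol=3.0)
pass_rate = (g <= 1.0).mean()
```

Real implementations prune the spatial search radius, which is where most of the speed reported for polyGeVero would come from.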

  6. Data acquisition electronics and reconstruction software for real time 3D track reconstruction within the MIMAC project

    NASA Astrophysics Data System (ADS)

    Bourrion, O.; Bosson, G.; Grignon, C.; Bouly, J. L.; Richer, J. P.; Guillaudin, O.; Mayet, F.; Billard, J.; Santos, D.

    2011-11-01

    Directional detection of non-baryonic Dark Matter requires 3D reconstruction of low-energy nuclear recoil tracks. A gaseous micro-TPC matrix, filled with either 3He, CF4 or C4H10, has been developed within the MIMAC project. Dedicated acquisition electronics and real-time track reconstruction software have been developed to monitor a 512-channel prototype. The self-triggered electronics uses embedded processing to reduce the data transfer to its useful part only, i.e., decoded coordinates of hit tracks and the corresponding energy measurements. An acquisition software with on-line monitoring and 3D track reconstruction is also presented.

  7. i3Drefine Software for Protein 3D Structure Refinement and Its Assessment in CASP10

    PubMed Central

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as ‘MULTICOM-CONSTRUCT’) was ranked the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 that exhibited consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/. PMID:23894517

  8. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
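
    The image-space decomposition and static load-balancing ideas surveyed above can be illustrated with a minimal sketch. This is a hypothetical example, not from the article: scanlines are assigned to workers in an interleaved pattern (a common static load-balancing tactic, since scene complexity tends to cluster in contiguous rows), and image assembly interleaves the rendered rows back together. The worker count and per-scanline work are placeholders.

```python
# Sketch: interleaved scanline assignment for image-space parallel rendering.

def assign_scanlines(height, num_workers):
    """Map each worker to the list of scanline indices it will render."""
    return {w: list(range(w, height, num_workers)) for w in range(num_workers)}

def render_scanline(y):
    # Stand-in for the real per-scanline rasterization work.
    return [(x, y) for x in range(4)]

def parallel_render(height, num_workers):
    # Each worker renders its scanlines; assembly interleaves them back.
    image = [None] * height
    for worker, rows in assign_scanlines(height, num_workers).items():
        for y in rows:
            image[y] = render_scanline(y)
    return image
```

    In a real system the per-worker loops would run concurrently; the interleaving matters because assigning contiguous row blocks instead would concentrate a complex screen region on one worker.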

  9. Lessons in modern digital field geology: Open source software, 3D techniques, and the new world of digital mapping

    NASA Astrophysics Data System (ADS)

    Pavlis, Terry; Hurtado, Jose; Langford, Richard; Serpa, Laura

    2014-05-01

    Although many geologists refuse to admit it, it is time to put paper-based geologic mapping into the historical archives and move to the full potential of digital mapping techniques. For our group, flat map digital geologic mapping is now a routine operation in both research and instruction. Several software options are available, and basic proficiency with the software can be learned in a few hours of instruction and practice. The first practical field GIS software, ArcPad, remains a viable, stable option on Windows-based systems. However, the vendor seems to be moving away from ArcPad in favor of mobile software solutions that are difficult to implement without GIS specialists. Thus, we have pursued a second software option based on the open source program QGIS. Our QGIS system uses the same shapefile-centric data structure as our ArcPad system, including similar pop-up data entry forms and generic graphics for easy data management in the field. The advantage of QGIS is that the same software runs on virtually all common platforms except iOS, although the Android version remains unstable as of this writing. A third software option we are experimenting with for flat map-based field work is Fieldmove, a derivative of the 3D-capable program Move developed by Midland Valley. Our initial experiments with Fieldmove are positive, particularly with the new, inexpensive (<300 Euros) Windows tablets. However, the lack of flexibility in data structure makes for cumbersome workflows when trying to interface our existing shapefile-centric data structures to Move. Nonetheless, in spring 2014 we will experiment with full-3D immersion in the field using the full Move software package in combination with ground-based LiDAR and photogrammetry. One new workflow suggested by our initial experiments is that field geologists should consider using photogrammetry software to capture 3D visualizations of key outcrops. This process is now straightforward in several software packages, and

  10. 3D-MRI rendering of the anatomical structures related to acupuncture points of the Dai mai, Yin qiao mai and Yang qiao mai meridians within the context of the WOMED concept of lateral tension: implications for musculoskeletal disease

    PubMed Central

    Moncayo, Roy; Rudisch, Ansgar; Kremser, Christian; Moncayo, Helga

    2007-01-01

    Background A conceptual model of lateral muscular tension in patients presenting thyroid-associated ophthalmopathy (TAO) has recently been described. Clinical improvement has been achieved by using acupuncture on points belonging to the so-called extraordinary meridians. The aim of this study was to characterize the anatomical structures related to these acupuncture points by means of 3D MRI image rendering relying on external markers. Methods The investigation was carried out on the index case patient of the lateral tension model. A licensed medical acupuncture practitioner located the following acupuncture points: 1) Yin qiao mai meridian (medial ankle): Kidney 3, Kidney 6, and the plantar Kidney 6 (Nan jing description); 2) Yang qiao mai meridian (lateral ankle): Bladder 62, Bladder 59, Bladder 61, and the plantar Bladder 62 (Nan jing description); 3) Dai mai meridian (waist): Liver 13, Gall bladder 26, Gall bladder 27, Gall bladder 28, and Gall bladder 29. The points were marked by taping a nitro-glycerin capsule on the skin. Imaging was done on a Siemens Magnetom Avanto MR scanner using an array head and body coil. Mainly T1-weighted imaging sequences, as routinely used for patient exams, were used to obtain multi-slice images. The image data were rendered in 3D mode using dedicated software (Leonardo, Siemens). Results Points of the Dai mai meridian – at the level of the waist – corresponded to the obliquus externus abdominis and the obliquus internus abdominis. Points of the Yin qiao mai meridian – at the medial side of the ankle – corresponded to tendinous structures of the flexor digitorum longus as well as to muscular structures of the abductor hallucis on the foot sole. Points of the Yang qiao mai meridian – at the lateral side of the ankle – corresponded to tendinous structures of the peroneus brevis and the peroneus longus, the lateral surface of the calcaneus and, close to the foot sole, the abductor digiti minimi. Conclusion This non

  11. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which severely limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model with a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity, (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side, and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
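
    On the client side, this approach reduces to requesting pre-rendered tiles by coordinate, much as a 2D slippy map does. The following is a hypothetical sketch, not taken from the paper: the tile size, the URL pattern, and the view-direction parameter are all assumptions made for illustration.

```python
# Sketch: thin-client tile resolution for pre-rendered oblique image tiles.

TILE_SIZE = 256  # pixels per tile edge (assumed)

def tile_url(base, zoom, x, y, direction="north"):
    """Resolve a tile coordinate to the URL of a pre-rendered image tile."""
    return f"{base}/{direction}/{zoom}/{x}/{y}.png"

def visible_tiles(viewport_px, origin_tile, zoom):
    """Enumerate the tile coordinates needed to fill a viewport."""
    w, h = viewport_px
    ox, oy = origin_tile
    cols = -(-w // TILE_SIZE)  # ceiling division
    rows = -(-h // TILE_SIZE)
    return [(zoom, ox + i, oy + j) for j in range(rows) for i in range(cols)]
```

    The point of the sketch is that the client never touches geometry: it only enumerates coordinates and fetches images, which is what decouples model complexity from transfer complexity.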

  12. A Software System for Filling Complex Holes in 3D Meshes by Flexible Interacting Particles

    NASA Astrophysics Data System (ADS)

    Yamazaki, Daisuke; Savchenko, Vladimir

    3D meshes generated by acquisition devices such as laser range scanners often contain holes due to occlusion, etc. In practice, these holes can be geometrically and topologically complex. We propose a heuristic hole-filling technique using particle systems to fill complex holes of arbitrary topology in 3D meshes. Our approach comprises the following steps: hole identification, base surface creation, particle distribution, triangulation, and mesh refinement. We demonstrate the functionality of the proposed surface retouching system on synthetic and real data.

  13. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    NASA Astrophysics Data System (ADS)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  14. Fast perspective volume ray casting method using GPU-based acceleration techniques for translucency rendering in 3D endoluminal CT colonography.

    PubMed

    Lee, Taek-Hee; Lee, Jeongjin; Lee, Ho; Kye, Heewon; Shin, Yeong Gil; Kim, Soo Hong

    2009-08-01

    Recent advances in the graphics processing unit (GPU) have enabled direct volume rendering at interactive rates. However, although perspective volume rendering of opaque isosurfaces is fast with conventional GPU-based methods, perspective volume rendering of non-opaque volumes, such as translucency rendering, is still slow. In this paper, we propose an efficient GPU-based acceleration technique of fast perspective volume ray casting for translucency rendering in computed tomography (CT) colonography. The empty space searching step is separated from the shading and compositing steps, and they are divided into separate processing passes on the GPU. Using this multi-pass acceleration, empty space leaping is performed exactly at the voxel level rather than at the block level, so that the efficiency of empty space leaping is maximized for colon data sets, which have many curved or narrow regions. In addition, the numbers of shading and compositing steps are fixed, and additional empty space leaps between colon walls are performed to further increase computational efficiency near the haustral folds. Experiments were performed to illustrate the efficiency of the proposed scheme compared with the conventional GPU-based method, which has been known to be the fastest algorithm. The experimental results showed that the rendering speed of our method was 7.72 fps for translucency rendering of a 1024x1024 colonoscopy image, about 3.54 times faster than that of the conventional method. Since our method performs fully optimized empty space leaping for colon interiors of any shape, the frame-rate variations of our method were about two times smaller than those of the conventional method, guaranteeing smooth navigation. The proposed method could be successfully applied to help diagnose colon cancer using translucency rendering in virtual colonoscopy. PMID:19541296
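
    The combination of voxel-level empty space leaping with front-to-back compositing and early ray termination can be sketched in simplified one-dimensional form. This is only an illustrative sketch: the densities and the opacity transfer function are made-up values, and the actual method runs per ray on the GPU in separate passes.

```python
# Sketch: one ray of a volume ray caster with voxel-level empty space
# leaping, front-to-back alpha compositing, and early ray termination.

def composite_ray(densities, opacity_of, skip_threshold=0.0):
    color, alpha = 0.0, 0.0
    i, n = 0, len(densities)
    while i < n and alpha < 0.99:  # early ray termination
        # Empty space leaping: skip transparent voxels at voxel granularity.
        while i < n and densities[i] <= skip_threshold:
            i += 1
        if i == n:
            break
        a = opacity_of(densities[i])
        # Front-to-back compositing: accumulate color weighted by the
        # remaining transparency, then update accumulated opacity.
        color += (1.0 - alpha) * a * densities[i]
        alpha += (1.0 - alpha) * a
        i += 1
    return color, alpha
```

    Because the skip runs per voxel rather than per block, narrow colon lumina between walls are leapt over exactly, which is the source of the speedup the paper reports.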

  15. Introducing 3D U-statistic method for separating anomaly from background in exploration geochemical data with associated software development

    NASA Astrophysics Data System (ADS)

    Ghannadpour, Seyyed Saeed; Hezarkhani, Ardeshir

    2016-03-01

    The U-statistic method is one of the most important structural methods for separating anomaly from background. It considers the location of samples and carries out the statistical analysis of the data without judging from a geochemical point of view, attempting to separate subpopulations and determine anomalous areas. In the present study, to use the U-statistic method under three-dimensional (3D) conditions, it is applied to the grades of two ideal test examples, taking the samples' Z values (elevation) into account. This is the first time that the method has been applied under 3D conditions. To evaluate the performance of the 3D U-statistic method and to compare it with a non-structural method, the method of threshold assessment based on median and standard deviation (the MSD method) is applied to the same two test examples. Results show that the samples flagged as anomalous by the U-statistic method are more regular and less dispersed than those flagged by the MSD method, so that, based on the locations of anomalous samples, their denser areas can be delineated as promising zones. Moreover, results show that at a threshold of U = 0, the total misclassification error of the U-statistic method is much smaller than that of the x̄ + n × s criterion. Finally, 3D models of the two test examples, separating anomaly from background using the 3D U-statistic method, are provided. The source code of a software program, developed in the MATLAB programming language to perform the calculations of the 3D U-spatial statistic method, is additionally provided. This software is compatible with all geochemical varieties and can be used in similar exploration projects.
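
    The non-structural baseline that the U-statistic is compared against can be sketched in a few lines. Note the abstract names the MSD method after the median, but writes its criterion as x̄ + n × s (mean plus n standard deviations); the sketch below uses the mean-based criterion as written. The multiplier n and the sample grades are illustrative assumptions, not values from the paper.

```python
# Sketch: mean + n*std threshold classification (the x̄ + n × s criterion).

import statistics

def msd_anomalies(grades, n=2.0):
    """Return indices of samples whose grade exceeds mean + n * std."""
    mean = statistics.fmean(grades)
    s = statistics.pstdev(grades)
    threshold = mean + n * s
    return [i for i, g in enumerate(grades) if g > threshold]
```

    The contrast drawn in the abstract is that this criterion ignores sample locations entirely, whereas the U-statistic incorporates them, which is why its anomalous samples cluster into more coherent zones.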

  16. Multimedia software design of automobile construction based on 3D engine

    NASA Astrophysics Data System (ADS)

    Xu, Guo-dong; Chi, Xiao-xia

    2013-03-01

    This paper introduces methods for the three-dimensional modeling, assembly, and simulation design of an automobile based on a 3D engine, Pro/Engineer, and 3DSMax. Research is also carried out on the order and route of virtual assembly, as well as the corresponding processes.

  17. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  18. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  19. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development

    SciTech Connect

    Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie

    2015-09-01

    Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway to implement the framework. This report discusses the current development of several tasks related to framework implementation, including a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.

  20. Laser scanner data processing and 3D modeling using a free and open source software

    SciTech Connect

    Gabriele, Fatuzzo; Michele, Mangiameli Giuseppe, Mussumeci; Salvatore, Zito

    2015-03-10

    Laser scanning is a technology that makes it possible to survey the geometry of objects quickly, with a high level of detail and completeness, based on the signal emitted by the laser and the corresponding return signal. When the incident laser radiation hits the object being surveyed, the radiation is reflected. The purpose is to build a three-dimensional digital model that allows one to reconstruct the object and to conduct studies regarding its design, restoration and/or conservation. When the laser scanner is equipped with a digital camera, the result of the measurement process is a set of points in XYZ coordinates with high density and accuracy, carrying radiometric and RGB values. In this case, the set of measured points is called a “point cloud” and allows the reconstruction of the Digital Surface Model. Although post-processing is usually performed with closed-source software, whose copyright restricts free use, free and open source software can perform just as well or better. Indeed, the latter can be used freely, with the added possibility of inspecting and even customizing the source code. The work started at the Faculty of Engineering in Catania is aimed at evaluating a valuable free and open source tool, MeshLab (an Italian software package for data processing), against a reference closed-source data-processing package, RapidForm. In this work, we compare the results obtained with MeshLab and RapidForm through the planning of the survey and the acquisition of the point cloud of a morphologically complex statue.
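
    One common way to compare point clouds produced by two processing tools is a cloud-to-cloud distance: for each point in cloud A, find the distance to its nearest neighbour in cloud B, then average. The brute-force sketch below is a hypothetical illustration (the coordinates are made up, and this is not necessarily how the authors performed their comparison); production tools accelerate the nearest-neighbour search with k-d trees.

```python
# Sketch: brute-force mean cloud-to-cloud distance between two point clouds.

import math

def nearest_distance(p, cloud):
    """Euclidean distance from point p to its nearest neighbour in cloud."""
    return min(math.dist(p, q) for q in cloud)

def mean_cloud_distance(cloud_a, cloud_b):
    """Average nearest-neighbour distance from cloud_a onto cloud_b."""
    return sum(nearest_distance(p, cloud_b) for p in cloud_a) / len(cloud_a)
```

    Note the measure is asymmetric: averaging A-to-B distances is not the same as B-to-A, so comparisons are often reported in both directions.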

  1. SF3M 2.0: improvement of 3D photo-reconstruction interface based on freely available software

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; James, Michael R.; Pérez, Rafael; Gómez, Jose A.

    2016-04-01

    During recent years, a number of tools based on Structure-from-Motion algorithms have been released for full image-based 3D reconstruction, either freely (e.g. Bundler, PMVS2, VisualSFM, MicMac) or commercially (e.g. Agisoft PhotoScan). The SF3M interface was developed in Matlab® to link existing software (VisualSFM, CloudCompare) and new applications into a semi-automated workflow including reconstruction, georeferencing and point-cloud filtering, and has been tested for gully erosion assessment with terrestrial images (Castillo et al., 2015). The main aim of this work is to provide an improved, freely available and easy-to-use alternative for 3D reconstruction intended for public agencies, non-profit organisations, researchers and other stakeholders interested in 3D modelling. In this communication we present SF3M 2.0, a new version of the graphical user interface. In this version, the SfM module is based on MicMac, an open-source tool (Pierrot-Deseilligny and Cléry, 2011) which provides advanced features such as camera calibration and constrained bundle adjustment using ground control points. SF3M 2.0 will be tested in two scenarios: a) using the same ground-based image set tested in Castillo et al. (2015) to compare the performance of both versions, and b) using aerial images taken from a helium balloon to assess a gully network in a 40-hectare catchment. In this study we explore the advantages of SF3M 2.0, explain its operation and evaluate its accuracy and performance. The tool will also be available for free download. References Castillo, C., James, M.R., Redel-Macías, M. D., Pérez, R., and Gómez, J.A.: SF3M software: 3-D photo-reconstruction for non-expert users and its application to a gully network, SOIL, 1, 583-594. Pierrot-Deseilligny, M and Cléry, I. APERO, an Open Source Bundle Adjusment Software for Automatic Calibration and Orientation of a Set of Images. Proceedings of the ISPRS Commission V Symposium, Image Engineering and Vision

  2. Simulation of 3D flows past hypersonic vehicles in FlowVision software

    NASA Astrophysics Data System (ADS)

    Aksenov, A. A.; Zhluktov, S. V.; Savitskiy, D. V.; Bartenev, G. Y.; Pokhilko, V. I.

    2015-11-01

    A new implicit velocity-pressure split method is discussed in this presentation. The method uses conservative velocities, obtained at the given time step, for the integration of the momentum equation and other convection-diffusion equations. This enables the simulation of super- and hypersonic flows, accounting for the motion of solid boundaries. Calculations of known test cases performed in the FlowVision software are demonstrated. It is shown that the method allows one to carry out calculations at high Mach numbers with an integration step substantially exceeding the explicit time step.

  3. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF Builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  4. Standard and fenestrated endograft sizing in EVAR planning: Description and validation of a semi-automated 3D software.

    PubMed

    Macía, Iván; de Blas, Mariano; Legarreta, Jon Haitz; Kabongo, Luis; Hernández, Óscar; Egaña, José María; Emparanza, José Ignacio; García-Familiar, Ainhoa; Graña, Manuel

    2016-06-01

    An abdominal aortic aneurysm (AAA) is a pathological dilation of the abdominal aorta that may lead to a rupture with fatal consequences. Endovascular aneurysm repair (EVAR) is a minimally invasive surgical procedure consisting of the deployment and fixation of a stent-graft that isolates the damaged vessel wall from blood circulation. The technique requires adequate endovascular device sizing, which may be performed by vascular analysis and quantification on Computerized Tomography Angiography (CTA) scans. This paper presents a novel 3D CTA image-based software for AAA inspection and EVAR sizing, eVida Vascular, which allows fast and accurate 3D endograft sizing for standard and fenestrated endografts. We provide a description of the system and its innovations, including the underlying vascular image analysis and visualization technology, functional modules and user interaction. Furthermore, an experimental validation of the tool is described, assessing the degree of agreement with a commercial, clinically validated software, when comparing measurements obtained for standard endograft sizing in a group of 14 patients. PMID:25747803

  5. 3-d finite element model development for biomechanics: a software demonstration

    SciTech Connect

    Hollerbach, K.; Hollister, A.M.; Ashby, E.

    1997-03-01

    Finite element analysis is becoming an increasingly important part of biomechanics and orthopedic research, as computational resources become more powerful, and data handling algorithms become more sophisticated. Until recently, tools with sufficient power did not exist or were not accessible to adequately model complicated, three-dimensional, nonlinear biomechanical systems. In the past, finite element analyses in biomechanics have often been limited to two-dimensional approaches, linear analyses, or simulations of single tissue types. Today, we have the resources to model fully three-dimensional, nonlinear, multi-tissue, and even multi-joint systems. The authors will present the process of developing these kinds of finite element models, using human hand and knee examples, and will demonstrate their software tools.

  6. WARP3D-Release 10.8: Dynamic Nonlinear Analysis of Solids using a Preconditioned Conjugate Gradient Software Architecture

    NASA Technical Reports Server (NTRS)

    Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.

    1998-01-01

    This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's Beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models, since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation, coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves
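
    The core idea behind a matrix-free LPCG solver can be illustrated with a minimal preconditioned conjugate gradient sketch. This is not WARP3D's implementation: a simple Jacobi (diagonal) preconditioner stands in for the real one, and the tiny 2x2 system is purely illustrative. The key property is that the solver only needs a matrix-vector product callback, which is what allows an element-by-element formulation to avoid assembling the global stiffness matrix.

```python
# Sketch: Jacobi-preconditioned conjugate gradients, written matrix-free.

def pcg(matvec, b, diag, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = list(b)                                   # r = b - A x, with x = 0
    z = [ri / di for ri, di in zip(r, diag)]      # apply M^-1 (Jacobi)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [ri / di for ri, di in zip(r, diag)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Example: solve [[4, 1], [1, 3]] x = [1, 2] without forming the matrix
# inside the solver; only the matvec closure knows the coefficients.
A = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
x = pcg(matvec, [1.0, 2.0], diag=[4.0, 3.0])
```

    In an element-by-element code, the matvec would loop over element blocks, applying each local stiffness to the relevant degrees of freedom and accumulating, which is what keeps memory at explicit-code levels.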

  7. Software architecture as a freedom for 3D content providers and users along with independency on purposes and used devices

    NASA Astrophysics Data System (ADS)

    Sultana, Razia; Christ, Andreas; Meyrueis, Patrick

    2014-05-01

    Improvements in the hardware and software of communication devices have made it possible to run Virtual Reality (VR) and Augmented Reality (AR) applications on them. Nowadays, it is possible to overlay synthetic information on real images, or even to play 3D online games on smartphones and other mobile devices. Hence the use of 3D data for business and especially for education purposes is ubiquitous. Because mobile phones are always at hand and always ready to use, they are considered the most promising communication devices. The total number of mobile phone users is increasing all over the world every day, which makes mobile phones the most suitable device to reach a huge number of end clients, whether for education or business purposes. There are different standards, protocols and specifications for establishing communication among different devices, but no initiative has been taken so far to ensure that the data sent through this communication process will be understood and usable by the destination device. Since not all devices can handle every kind of 3D data format, and it is also unrealistic to maintain different versions of the same data for compatibility with each destination device, a general solution is necessary. The architecture proposed in this paper provides device- and purpose-independent 3D data visibility, anytime and anywhere, to the right person in a suitable format. No solution is without limitations. The architecture has been implemented in a prototype for experimental validation, which also shows the difference between theory and practice.

  8. RVA. 3-D Visualization and Analysis Software to Support Management of Oil and Gas Resources

    SciTech Connect

    Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne; Vanmoer, Mark; Angrave, Lawrence; Damico, James R.; Grigsby, Nathan

    2015-12-01

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64-bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including

  9. 3D profilometric characterization of the aged skin surface using a skin replica and Alicona Mex software.

    PubMed

    Pirisinu, Marco; Mazzarello, Vittorio

    2016-05-01

The skin's surface is characterized by a network of furrows and wrinkles of varying height and depth. Several studies have shown that processes such as aging, photoaging and cancer may alter the dermal surface ultrastructure. Quantitative analysis of skin topography is therefore a key point for understanding the health condition of the skin. Here, for the first time, the fine structure of the skin was studied via a new approach in which the replica method was combined with Alicona Mex software and scanning electron microscopy (SEM). The skin texture of the cheek and forearm was studied in 120 healthy Sardinian volunteers, divided into three age groups. The skin areas of interest were reproduced by the silicone replica method; each replica was examined by SEM and digital images were taken. Using Alicona Mex software, 3D images were created and a list of 24 surface texture parameters was obtained, from which the most representative were chosen to assess changes between groups. The skin texture of the forearm and cheek showed a gradual loss of its typical polyhedric mesh with increasing age group; in particular, photoexposure increased the loss of dermal texture. To date, Alicona Mex technology has been used exclusively in palaeontology studies; our results show that in-depth analysis of skin texture can be performed with it, and support Alicona Mex software as a promising new tool for dermatological research. This new analytical approach provides an easy and fast way to assess skin texture and its changes using high-quality 3D images. SCANNING 38:213-220, 2016. © 2015 Wiley Periodicals, Inc. PMID:26258960

  10. Development of fast patient position verification software using 2D-3D image registration and its clinical experience.

    PubMed

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-09-01

To improve treatment workflow, we developed a graphic processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313
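The abstract names zero-mean normalized cross-correlation (ZNCC) among the similarity metrics. As a hedged illustration of why ZNCC suits FPD-to-DRR matching — it is invariant to linear brightness and contrast changes — here is a minimal sketch on 1-D intensity lists, not the authors' GPU implementation:

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length
    intensity lists; returns a value in [-1, 1]."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# Images identical up to a brightness/contrast change still score 1.0,
# which is why ZNCC is robust when X-ray and CT-derived images differ
# in exposure.
ref = [10, 20, 30, 40]
scaled = [25, 45, 65, 85]           # 2*ref + 5
print(round(zncc(ref, scaled), 6))  # → 1.0
```

In 2D/3D registration this score would be evaluated over candidate patient poses and maximized, optionally summed with the gradient-difference (GD) term as in the ZNCC + GD combination the study reports.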

  11. Development of fast patient position verification software using 2D-3D image registration and its clinical experience

    PubMed Central

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-01-01

To improve treatment workflow, we developed a graphic processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313

  12. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.
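The comparison described reduces, per anatomic feature, to absolute and relative differences between printed-part and direct measurements. A minimal sketch with placeholder feature names and values (not data from the study):

```python
def dimensional_errors(direct_mm, printed_mm):
    """Absolute (mm) and relative (%) error of printed vs. direct
    measurements, keyed by feature name."""
    out = {}
    for feat, d in direct_mm.items():
        p = printed_mm[feat]
        out[feat] = (round(p - d, 3), round(100.0 * (p - d) / d, 2))
    return out

# Placeholder values for illustration only (not values from the study).
direct  = {"body_height": 27.0, "canal_width": 24.5}
printed = {"body_height": 27.2, "canal_width": 24.3}
print(dimensional_errors(direct, printed))
```

The same routine would be run twice in a study like this one: printed vs. direct, and 3D-rendered vs. direct, so the two error distributions can be compared.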

  13. 3D Visualization of Astronomical Data with Blender

    NASA Astrophysics Data System (ADS)

    Kent, B. R.

    2015-09-01

We present the innovative use of Blender, a 3D graphics package, for astronomical visualization. With a Python API and a feature-rich interface, Blender lends itself well to many 3D data visualization scenarios, including data cube rendering, N-body simulations, catalog displays, and surface maps. We focus on the aspects of the software most useful to astronomers, such as visual data exploration, applying data to Blender object constructs, and using graphics processing units (GPUs) for rendering. We share examples from both observational data and theoretical models to illustrate how the software can fit into an astronomer's toolkit.
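One step the abstract highlights — applying data to Blender object constructs — begins by flattening a data cube into vertex tuples. A hedged sketch of that preparation step only; the actual bpy mesh-creation calls run inside Blender and are omitted here:

```python
def cube_to_vertices(cube, threshold):
    """Flatten a 3D data cube (nested lists indexed [z][y][x]) into
    (x, y, z, value) tuples above a flux threshold -- the form one
    would hand to a Blender mesh or point cloud via the bpy API
    (bpy calls omitted, since they only run inside Blender)."""
    verts = []
    for z, plane in enumerate(cube):
        for y, row in enumerate(plane):
            for x, v in enumerate(row):
                if v >= threshold:
                    verts.append((x, y, z, v))
    return verts

# Tiny 2x2x2 "data cube"; keep voxels at or above flux 5.
cube = [[[0, 5], [1, 0]],
        [[7, 0], [0, 9]]]
print(cube_to_vertices(cube, 5))
# → [(1, 0, 0, 5), (0, 0, 1, 7), (1, 1, 1, 9)]
```

Inside Blender, the returned tuples would feed something like `bpy.data.meshes.new(...)` with the values mapped to vertex colors or emission strength; that mapping is a design choice left to the visualization.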

  14. CT-guided Irreversible Electroporation in an Acute Porcine Liver Model: Effect of Previous Transarterial Iodized Oil Tissue Marking on Technical Parameters, 3D Computed Tomographic Rendering of the Electroporation Zone, and Histopathology

    SciTech Connect

    Sommer, C. M.; Fritz, S.; Vollherbst, D.; Zelzer, S.; Wachter, M. F.; Bellemann, N.; Gockner, T.; Mokry, T.; Schmitz, A.; Aulmann, S.; Stampfl, U.; Pereira, P.; Kauczor, H. U.; Werner, J.; Radeleff, B. A.

    2015-02-15

Purpose: To evaluate the effect of previous transarterial iodized oil tissue marking (ITM) on technical parameters, three-dimensional (3D) computed tomographic (CT) rendering of the electroporation zone, and histopathology after CT-guided irreversible electroporation (IRE) in an acute porcine liver model as a potential strategy to improve IRE performance. Methods: After Ethics Committee approval was obtained, in five landrace pigs, two IREs of the right and left liver (RL and LL) were performed under CT guidance with identical electroporation parameters. Before IRE, transarterial marking of the LL was performed with iodized oil. Nonenhanced and contrast-enhanced CT examinations followed. One hour after IRE, animals were killed and livers collected. Mean resulting voltage and amperage during IRE were assessed. For 3D CT rendering of the electroporation zone, parameters for size and shape were analyzed. Quantitative data were compared by the Mann–Whitney test. Histopathological differences were assessed. Results: Mean resulting voltage and amperage were 2,545.3 ± 66.0 V and 26.1 ± 1.8 A for RL, and 2,537.3 ± 69.0 V and 27.7 ± 1.8 A for LL without significant differences. Short axis, volume, and sphericity index were 16.5 ± 4.4 mm, 8.6 ± 3.2 cm³, and 1.7 ± 0.3 for RL, and 18.2 ± 3.4 mm, 9.8 ± 3.8 cm³, and 1.7 ± 0.3 for LL without significant differences. For RL and LL, the electroporation zone consisted of severely widened hepatic sinusoids containing erythrocytes and showed homogeneous apoptosis. For LL, iodized oil could be detected in the center and at the rim of the electroporation zone. Conclusion: There is no adverse effect of previous ITM on technical parameters, 3D CT rendering of the electroporation zone, and histopathology after CT-guided IRE of the liver.

  15. Modeling Coastal Salinity in Quasi 2D and 3D Using a DUALEM-421 and Inversion Software.

    PubMed

    Davies, Gareth; Huang, Jingyi; Monteiro Santos, Fernando Acacio; Triantafilis, John

    2015-01-01

Rising sea levels, owing to climate change, are a threat to fresh water coastal aquifers. This is because saline intrusions are caused by increases and intensification of medium-large scale influences including sea level rise, wave climate, tidal cycles, and shifts in beach morphology. Methods are therefore required to understand the dynamics of these interactions. While traditional borehole and galvanic contact resistivity (GCR) techniques have been successful, they are time-consuming. Alternatively, frequency-domain electromagnetic (FEM) induction is potentially useful as physical contact with the ground is not required. A DUALEM-421 and the EM4Soil inversion software package are used to develop quasi two-dimensional (2D) and quasi three-dimensional (3D) electromagnetic conductivity images (EMCIs) across Long Reef Beach located north of Sydney Harbour, New South Wales, Australia. The quasi 2D models discern: the dry sand (<10 mS/m) associated with the incipient dune; sand with fresh water (10 to 20 mS/m); mixing of fresh and saline water (20 to 500 mS/m); and saline sand of varying moisture (>500 mS/m). The quasi 3D EMCIs generated for low and high tides suggest that daily tidal cycles do not have a significant effect on local groundwater salinity. Instead, the saline intrusion is most likely influenced by medium-large scale drivers including local wave climate and morphology along this wave-dominated beach. Further research is required to elucidate the influence of spring-neap tidal cycles, contrasting beach morphological states and sea level rise. PMID:25053423
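The zone boundaries quoted in the abstract map directly onto a classifier. A small sketch using exactly those thresholds; the handling of values falling exactly on 10, 20 or 500 mS/m is our assumption, since the abstract does not specify it:

```python
def classify_ec(ec_ms_per_m):
    """Map a modelled bulk conductivity value (mS/m) to the beach
    zones described in the abstract."""
    if ec_ms_per_m < 10:
        return "dry sand"
    if ec_ms_per_m <= 20:
        return "fresh water sand"
    if ec_ms_per_m <= 500:
        return "fresh/saline mixing"
    return "saline sand"

print([classify_ec(v) for v in (5, 15, 120, 900)])
# → ['dry sand', 'fresh water sand', 'fresh/saline mixing', 'saline sand']
```

Applied cell by cell to an inverted EMCI grid, this kind of lookup is how a conductivity model is turned into the interpreted hydrological zones the study discusses.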

  16. A Comprehensive Software System for Interactive, Real-time, Visual 3D Deterministic and Stochastic Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Li, S.

    2002-05-01

Taking advantage of recent developments in groundwater modeling research and in computer, image and graphics processing and object-oriented programming technologies, Dr. Li and his research group have recently developed a comprehensive software system for unified deterministic and stochastic groundwater modeling. Characterized by a new real-time modeling paradigm and improved computational algorithms, the software simulates 3D unsteady flow and reactive transport in general groundwater formations subject to both systematic and "randomly" varying stresses and geological and chemical heterogeneity. The software system has the following distinct features and capabilities: interactive simulation and real-time visualization and animation of flow in response to deterministic as well as stochastic stresses; interactive, visual, real-time particle tracking, random walk, and reactive plume modeling in both systematically and randomly fluctuating flow; interactive statistical inference, scattered data interpolation, regression, ordinary and universal kriging, and conditional and unconditional simulation; real-time, visual, parallel conditional flow and transport simulations; interactive water and contaminant mass balance analysis with visual, real-time flux updates; interactive, visual, real-time monitoring of head and flux hydrographs and concentration breakthroughs; real-time modeling and visualization of aquifer transition from confined to unconfined to partially desaturated or completely dry and rewetting; simultaneous and embedded subscale models with automatic, real-time regional-to-local data extraction; multiple subscale flow and transport models; and real-time modeling of steady and transient vertical flow patterns on multiple arbitrarily shaped cross-sections, with simultaneous visualization of aquifer stratigraphy, properties, hydrological features (rivers, lakes, wetlands, wells, drains, surface seeps), and dynamically adjusted surface flooding area

  17. PONDEROSA-C/S: client-server based software package for automated protein 3D structure determination.

    PubMed

    Lee, Woonghee; Stark, Jaime L; Markley, John L

    2014-11-01

Peak-picking Of Noe Data Enabled by Restriction Of Shift Assignments-Client Server (PONDEROSA-C/S) builds on the original PONDEROSA software (Lee et al. in Bioinformatics 27:1727-1728. doi:10.1093/bioinformatics/btr200, 2011) and includes improved features for structure calculation and refinement. PONDEROSA-C/S consists of three programs: Ponderosa Server, Ponderosa Client, and Ponderosa Analyzer. PONDEROSA-C/S takes as input the protein sequence, a list of assigned chemical shifts, and nuclear Overhauser data sets ((13)C- and/or (15)N-NOESY). The output is a set of assigned NOEs and 3D structural models for the protein. Ponderosa Analyzer supports the visualization, validation, and refinement of the results from Ponderosa Server. These tools enable semi-automated NMR-based structure determination of proteins in a rapid and robust fashion. We present examples showing the use of PONDEROSA-C/S in solving structures of four proteins: two that enable comparison with the original PONDEROSA package, and two from the Critical Assessment of automated Structure Determination by NMR (Rosato et al. in Nat Methods 6:625-626. doi:10.1038/nmeth0909-625, 2009) competition. The software package can be downloaded freely in binary format from http://pine.nmrfam.wisc.edu/download_packages.html. Registered users of the National Magnetic Resonance Facility at Madison can submit jobs to the PONDEROSA-C/S server at http://ponderosa.nmrfam.wisc.edu, where instructions and tutorials can be found. Structures are normally returned within 1-2 days. PMID:25190042

  18. SedWorks: A 3-D visualisation software package to help students link surface processes with depositional product

    NASA Astrophysics Data System (ADS)

    Jones, M. A.; Edwards, A.; Boulton, P.

    2010-12-01

    Helping students to develop a cognitive and intuitive feel for the different temporal and spatial scales of processes through which the rock record is assembled is a primary goal of geoscience teaching. SedWorks is a 3-D virtual geoscience world that integrates both quantitative modelling and field-based studies into one interactive package. The program aims to help students acquire scientific content, cultivate critical thinking skills, and hone their problem solving ability, while also providing them with the opportunity to practice the activities undertaken by professional earth scientists. SedWorks is built upon a game development platform used for constructing interactive 3-D applications. Initially the software has been developed for teaching the sedimentology component of a Geoscience degree and consists of a series of continents or land masses each possessing sedimentary environments which the students visit on virtual field trips. The students are able to interact with the software to collect virtual field data from both the modern environment and the stratigraphic record, and to formulate hypotheses based on their observations which they can test through virtual physical experimentation within the program. The program is modular in design in order to enhance its adaptability and to allow scientific content to be updated so that the knowledge and skills acquired are at the cutting edge. We will present an example module in which students undertake a virtual field study of a 2-km long stretch of a river to observe how sediment is transported and deposited. On entering the field area students are able to observe different bedforms in different parts of the river as they move up- and down-stream, as well as in and out of the river. As they explore, students discover ‘hot spots’ at which particular tools become available to them. This includes tools for measuring the physical parameters of the flow and sediment bed (e.g. velocity, depth, grain size, bed

  19. NOTE: A software tool for 2D/3D visualization and analysis of phase-space data generated by Monte Carlo modelling of medical linear accelerators

    NASA Astrophysics Data System (ADS)

    Neicu, Toni; Aljarrah, Khaled M.; Jiang, Steve B.

    2005-10-01

    A computer program has been developed for novel 2D/3D visualization and analysis of the phase-space parameters of Monte Carlo simulations of medical accelerator radiation beams. The software is written in the IDL language and reads the phase-space data generated in the BEAMnrc/BEAM Monte Carlo code format. Contour and colour-wash plots of the fluence, mean energy, energy fluence, mean angle, spectra distribution, energy fluence distribution, angular distribution, and slices and projections of the 3D ZLAST distribution can be calculated and displayed. Based on our experience of using it at Massachusetts General Hospital, the software has proven to be a useful tool for analysis and verification of the Monte Carlo generated phase-space files. The software is in the public domain.
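Quantities such as the fluence and mean-energy maps mentioned above come from binning phase-space particles onto a pixel grid. A simplified, hedged stand-in using plain (x, y, energy) tuples rather than the BEAMnrc phase-space record format:

```python
def fluence_and_mean_energy(particles, nx, ny, xmax, ymax):
    """Bin phase-space particles (x, y, energy) onto an nx-by-ny grid
    centred on the beam axis; returns per-pixel counts (fluence, up to
    the pixel-area factor) and mean energy. A toy analogue of the
    analysis the IDL tool performs on BEAMnrc phase-space files."""
    counts = [[0] * nx for _ in range(ny)]
    esum = [[0.0] * nx for _ in range(ny)]
    for x, y, e in particles:
        i = int((x + xmax) / (2 * xmax) * nx)   # column index
        j = int((y + ymax) / (2 * ymax) * ny)   # row index
        if 0 <= i < nx and 0 <= j < ny:
            counts[j][i] += 1
            esum[j][i] += e
    mean_e = [[esum[j][i] / counts[j][i] if counts[j][i] else 0.0
               for i in range(nx)] for j in range(ny)]
    return counts, mean_e

parts = [(-0.5, -0.5, 6.0), (-0.4, -0.6, 4.0), (0.5, 0.5, 1.25)]
counts, mean_e = fluence_and_mean_energy(parts, 2, 2, 1.0, 1.0)
print(counts)   # → [[2, 0], [0, 1]]
print(mean_e)   # → [[5.0, 0.0], [0.0, 1.25]]
```

Energy fluence, spectra and angular distributions follow the same pattern with a different accumulated quantity per bin.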

  20. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  1. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs

    PubMed Central

    Burns, JHR; Delparte, D; Gates, RD; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190

  2. Integrating structure-from-motion photogrammetry with geospatial software as a novel technique for quantifying 3D ecological characteristics of coral reefs.

    PubMed

    Burns, Jhr; Delparte, D; Gates, R D; Takabayashi, M

    2015-01-01

    The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190
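Rugosity, one of the 3D metrics named above, is classically the ratio of contoured length to straight-line length along a transect (the "chain-and-tape" index). A minimal sketch of that index computed from profile points that could be extracted from an SfM surface; this is an illustration, not the geospatial-software workflow used in the study:

```python
import math

def linear_rugosity(profile):
    """Chain-and-tape rugosity index from a reef transect profile given
    as (distance, height) points: contour length divided by straight
    horizontal length. 1.0 = perfectly flat; higher = more complex."""
    contour = 0.0
    for (x0, z0), (x1, z1) in zip(profile, profile[1:]):
        contour += math.hypot(x1 - x0, z1 - z0)
    flat = profile[-1][0] - profile[0][0]
    return contour / flat

# Flat transect vs. one crossing a 3 m coral head over a 4 m run.
print(linear_rugosity([(0, 0), (10, 0)]))         # → 1.0
print(linear_rugosity([(0, 0), (2, 3), (4, 0)]))  # contour = 2*sqrt(13)
```

The full-surface analogue divides 3D surface area by planar footprint area, which is what a digital elevation model in a GIS package makes straightforward to compute.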

  3. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  4. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

The lack of visible and approachable ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are not widely available due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-movement tracking, we want to create an innovative method for learning the techniques of conducting operations in a 3D game format, which can make the educational process interesting and effective. Creating a 3D virtual simulator will solve several conceptual problems at once: the opportunity to improve practical skills without time limits and without risk to the patient; a highly realistic operating-room environment and anatomical body structures; the use of game mechanics to ease information perception and accelerate the memorization of methods; and the accessibility of the program.

  5. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
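The final compositing step — merging independently rendered partial images — can be sketched with a z-buffer rule: for each pixel, the fragment nearest the viewer wins. A toy sketch of that rule only; the paper's optimal parallel compositing method is more elaborate:

```python
def composite(partials):
    """Merge partial images from independent renderers with a z-buffer
    rule: each partial is a dict pixel -> (depth, color), and the
    smallest depth (nearest the viewer) wins at every pixel."""
    out = {}
    for img in partials:
        for px, (z, color) in img.items():
            if px not in out or z < out[px][0]:
                out[px] = (z, color)
    return out

# Two nodes rendered disjoint subsets of the spheres; pixel (0, 0)
# is covered by both, and the nearer (smaller-depth) fragment wins.
node_a = {(0, 0): (2.0, "red"), (1, 0): (5.0, "red")}
node_b = {(0, 0): (3.0, "blue"), (2, 0): (1.0, "blue")}
print(composite([node_a, node_b]))
# → {(0, 0): (2.0, 'red'), (1, 0): (5.0, 'red'), (2, 0): (1.0, 'blue')}
```

On a MIMD machine the merge itself is parallelized, typically as a tree of pairwise composites rather than the sequential loop shown here.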

  6. Normal-mode function representation of global 3-D data sets: open-access software for the atmospheric research community

    NASA Astrophysics Data System (ADS)

    Žagar, N.; Kasahara, A.; Terasaki, K.; Tribbia, J.; Tanaka, H.

    2015-04-01

    This article presents new software for the analysis of global dynamical fields in (re)analyses, weather forecasts and climate models. A new diagnostic tool, developed within the MODES project, allows one to diagnose properties of balanced and inertio-gravity (IG) circulations across many scales. In particular, the IG spectrum, which has only recently become observable, can be studied simultaneously in the mass and wind fields while considering the whole model depth in contrast to the majority of studies. The paper includes the theory of normal-mode function (NMF) expansion, technical details of the Fortran 90 code, examples of namelists which control the software execution and outputs of the software application on the ERA Interim reanalysis data set. The applied libraries and default compiler are from the open-source domain. A limited understanding of Fortran suffices for the successful implementation of the software. The presented application of the software to the ERA Interim data set reveals several aspects of the large-scale circulation after it has been partitioned into the linearly balanced and IG components. The global energy distribution is dominated by the balanced energy while the IG modes contribute around 10% of the total wave energy. However, on sub-synoptic scales, IG energy dominates and it is associated with the main features of tropical variability on all scales. The presented energy distribution and features of the zonally averaged and equatorial circulation provide a reference for the validation of climate models.
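The balanced/IG energy partition described above rests on projecting fields onto orthogonal normal modes and summing squared expansion coefficients. A toy 1-D analogue with a sine basis; MODES itself expands 3-D fields in normal-mode functions, and this only illustrates the projection-and-energy step:

```python
import math

def mode_energy(field, n_modes):
    """Project a periodic 1-D field (sampled at N points) onto
    orthonormal sine modes and return the energy per mode -- a toy
    analogue of the normal-mode energy decomposition."""
    N = len(field)
    energies = []
    for k in range(1, n_modes + 1):
        basis = [math.sqrt(2.0 / N) * math.sin(2 * math.pi * k * j / N)
                 for j in range(N)]
        coeff = sum(f * b for f, b in zip(field, basis))
        energies.append(coeff ** 2)
    return energies

# A field built from mode 2 alone puts essentially all its energy
# in mode 2 (modes 1 and 3 pick up only floating-point noise).
N = 64
field = [3.0 * math.sin(2 * math.pi * 2 * j / N) for j in range(N)]
print([round(e, 6) for e in mode_energy(field, 3)])
```

In the real decomposition the modes are the Hough/vertical-structure normal-mode functions, and summing the energies over the balanced and IG subsets yields the partition quoted in the abstract.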

  7. Normal-mode function representation of global 3-D datasets: an open-access software for atmospheric research community

    NASA Astrophysics Data System (ADS)

    Žagar, N.; Kasahara, A.; Terasaki, K.; Tribbia, J.; Tanaka, H.

    2014-12-01

    The paper presents new software for the analysis of global dynamical fields in (re)analyses, weather forecasts and climate models. A new diagnostic tool, developed within the MODES project, allows one to diagnose properties of balanced and inertio-gravity (IG) circulation across many scales. In particular, the IG spectrum, which has only recently become observable, can be studied simultaneously in the mass field and wind field and considering the whole model depth in contrary to majority of studies. The paper presentation includes the theory of normal-mode function expansion, technical details of the Fortran 90 code, examples of namelists which control the software execution and outputs of the software application on the reanalysis dataset ERA Interim. The applied libraries and default compiler are from the open-source domain. A limited understanding of Fortran suffices for the successful implementation of the software. The presented application of the software to the ERA Interim dataset show some features of the large-scale circulation after it has been split into the balanced and IG components. The global energy distribution is dominated by the balanced energy with IG modes making less than 10% of the total wave energy. However, on subsynoptic scales IG energy dominates and it is associated with the main features of tropical variability on all scales. The presented energy distribution and features of the zonally-averaged and equatorial circulation provide a reference for the validation of climate models.

  8. IP4DI: A software for time-lapse 2D/3D DC-resistivity and induced polarization tomography

    NASA Astrophysics Data System (ADS)

    Karaoulis, M.; Revil, A.; Tsourlos, P.; Werkema, D. D.; Minsley, B. J.

    2013-04-01

We propose a 2D/3D forward modelling and inversion package to invert direct current (DC)-resistivity, time-domain induced polarization (TDIP), and frequency-domain induced polarization (FDIP) data. Each cell used for the discretization of the 2D/3D problems is characterized by a DC-resistivity value and a chargeability or complex conductivity for TDIP/FDIP problems, respectively. The governing elliptic partial differential equations are solved with the finite element method, which can be applied for both real and complex numbers. The inversion can be performed either for a single snapshot of data or for a sequence of snapshots in order to monitor a dynamic process such as a salt tracer test. For the time-lapse inversion, we have developed an active time constrained (ATC) approach that is very efficient in filtering out noise in the data that is not correlated over time. The forward algorithm is benchmarked with simple analytical solutions. The inversion package IP4DI is benchmarked with three tests, two including simple geometries. The last one corresponds to a time-lapse resistivity problem for cross-well tomography during enhanced oil recovery. The algorithms are based on a MATLAB® code package and a graphical user interface (GUI).
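The linear step inside a regularized (smoothness- or time-constrained) resistivity inversion is a damped least-squares solve. A toy 2-parameter sketch of that step with a simple identity damping term; this is an illustration, not the IP4DI implementation, which uses finite elements and MATLAB:

```python
def damped_lsq_2param(G, d, lam):
    """Solve the 2-parameter damped least-squares problem
    (G^T G + lam*I) m = G^T d with an explicit 2x2 inverse -- the core
    linear step inside regularized geophysical inversion."""
    # Normal-equation matrix A = G^T G + lam*I and right-hand side b.
    a11 = sum(g[0] * g[0] for g in G) + lam
    a12 = sum(g[0] * g[1] for g in G)
    a22 = sum(g[1] * g[1] for g in G) + lam
    b1 = sum(g[0] * di for g, di in zip(G, d))
    b2 = sum(g[1] * di for g, di in zip(G, d))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Noise-free data with lam=0 recovers the model exactly; lam > 0
# trades data fit for stability, shrinking the estimate.
G = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
true_m = (3.0, 4.0)
d = [g[0] * true_m[0] + g[1] * true_m[1] for g in G]
print(damped_lsq_2param(G, d, 0.0))  # → (3.0, 4.0)
```

A time-lapse scheme such as ATC adds cross-time regularization terms to the same normal equations, penalizing model changes between snapshots that the data do not demand.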

  9. How Students Solve Problems in Spatial Geometry while Using a Software Application for Visualizing 3D Geometric Objects

    ERIC Educational Resources Information Center

    Widder, Mirela; Gorsky, Paul

    2013-01-01

    In schools, learning spatial geometry is usually dependent upon a student's ability to visualize three dimensional geometric configurations from two dimensional drawings. Such a process, however, often creates visual obstacles which are unique to spatial geometry. Useful software programs which realistically depict three dimensional geometric…

  10. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV) Digital Images

    NASA Astrophysics Data System (ADS)

    Zarnowski, Aleksander; Banaszek, Anna; Banaszek, Sebastian

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAVs). Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches, and the lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition. This paper presents the results of experiments in the use of digital images in the construction of a photo-realistic 3D model of a historical building (Raphaelsohns' Sawmill in Olsztyn). The aim of the study at the first stage was to determine the meteorological and technical conditions for the acquisition of aerial and ground-based photographs. At the next stage, the technology of 3D modelling was developed using only ground-based or only aerial non-metric digital images. At the last stage of the study, an experiment was conducted to assess the possibility of 3D modelling with the comprehensive use of aerial (UAV) and ground-based digital photographs in terms of their labour intensity and precision of development. Data integration and automatic photo-realistic 3D construction of the models was done with Pix4Dmapper and Agisoft PhotoScan software. Analyses have shown that when certain parameters established in the experiment are kept, the process of developing the stock-taking documentation for a historical building moves from analogue to digital technology standards at considerably reduced cost.

  11. Computer-assisted 3D design software for teaching neuro-ophthalmology of the oculomotor system and training new retinal surgery techniques

    NASA Astrophysics Data System (ADS)

    Glittenberg, Carl; Binder, Susanne

    2004-07-01

    Purpose: To create a more effective method of demonstrating complex subject matter in ophthalmology with the use of high-end, 3-D, computer-aided animation and interactive multimedia technologies. Specifically, to explore the possibilities of demonstrating the complex nature of the neuro-ophthalmological basics of the human oculomotor system in a clear and non-confusing way, and to demonstrate new forms of retinal surgery in a manner that makes the procedures easier to understand for other retinal surgeons. Methods and Materials: Using Reflektions 4.3, Monzoom Pro 4.5, Cinema 4D XL 5.03, Cinema 4D XL 8 Studio Bundle, Mediator 4.0, Mediator Pro 5.03, Fujitsu-Siemens Pentium III and IV, Gericom Webgine laptop, M.G.I. Video Wave 1.0 and 5, Micrografix Picture Publisher 6.0 and 8, Amorphium 1.0, and Blobs for Windows, we created 3-D animations showing the origin, insertion, course, main direction of pull, and auxiliary direction of pull of the six extra-ocular eye muscles. We created 3-D animations that (a) show the intra-cranial path of the relevant oculomotor cranial nerves and which muscles are supplied by them, (b) show which muscles are active in each of the ten lines of sight, (c) demonstrate the various malfunctions of oculomotor systems, and (d) show the surgical techniques and the challenges in radial optic neurotomies and subretinal surgeries. Most of the 3-D animations were integrated into interactive multimedia teaching programs. Their effectiveness was compared to conventional teaching methods in a comparative study performed at the University of Vienna. We also performed a survey to examine the response of students being taught with the interactive programs. We are currently in the process of placing most of the animations on an interactive web site in order to make them freely available to everyone who is interested.
Results: Although learning how to use complex 3-D computer animation and multimedia authoring software can be very time consuming and

  12. ASTRID: A 3D Eulerian software for subcooled boiling modelling - comparison with experimental results in tubes and annuli

    SciTech Connect

    Briere, E.; Larrauri, D.; Olive, J.

    1995-09-01

    For about four years, Electricite de France has been developing a 3-D computer code for the Eulerian simulation of two-phase flows. This code, named ASTRID, is based on the six-equation two-fluid model. Boiling water flows, such as those encountered in nuclear reactors, are among the main applications of ASTRID. In order to provide ASTRID with closure laws and boundary conditions suitable for boiling flows, a boiling model has been developed by EDF and the Institut de Mecanique des Fluides de Toulouse. In the fluid, the heat and mass transfer between a bubble and the liquid is modelled. At the heating wall, the incipient boiling point is determined according to Hsu's criterion and the boiling heat flux is split into three additive terms: a convective term, a quenching term and a vaporisation term. This model uses several correlations. EDF's program in boiling two-phase flows also includes experimental studies, some of which are performed in collaboration with other laboratories. Refrigerant subcooled boiling in both tubular (DEBORA experiment, CEN Grenoble) and annular geometry (Arizona State University experiment) has been computed with ASTRID. The simulations already show satisfactory results for void fraction and liquid temperature. Ways of improving the model are outlined, especially for the dynamical part.

  13. Evaluation of Structure from Motion Software to Create 3D Models of Late Nineteenth Century Great Lakes Shipwrecks Using Archived Diver-Acquired Video Surveys

    NASA Astrophysics Data System (ADS)

    Mertes, J.; Thomsen, T.; Gulley, J.

    2014-12-01

    Here we demonstrate the ability to use archived video surveys to create photorealistic 3D models of submerged archeological sites. We created 3D models of two nineteenth-century Great Lakes shipwrecks using diver-acquired video surveys and Structure from Motion (SfM) software. Models were georeferenced using archived hand survey data. Comparison of hand survey measurements and digital measurements made using the models demonstrates that spatial analysis produces results with reasonable accuracy when wreck maps are available. Error associated with digital measurements displayed an inverse relationship to object size: measurement error ranged from a maximum of 18% (on a 0.37 m object) to a minimum of 0.56% (on a 4.21 m object). Our results demonstrate that SfM can generate models of large maritime archaeological sites for research, education and outreach purposes. Where site maps are available, these 3D models can be georeferenced to allow additional spatial analysis long after on-site data collection.

  14. Computer-Aided Designing and Manufacturing of Lingual Fixed Orthodontic Appliance Using 2D/3D Registration Software and Rapid Prototyping

    PubMed Central

    Kwon, Soon-Yong; Kim, Ki-Beom; Chung, Kyu-Rhim; Kim (Sunny), Seong-Hun

    2014-01-01

    The availability of 3D dental model scanning technology, combined with the ability to register CBCT data with digital models, has enabled the fabrication of orthognathic surgical CAD/CAM designed splints, customized brackets, and indirect bonding systems. In this study, custom lingual orthodontic appliances were virtually designed by merging 3D model images with lateral and posterior-anterior cephalograms. By exporting design information to 3D CAD software, we have produced a stereolithographic prototype and converted it into a cobalt-chrome alloy appliance as a way of combining traditional prosthetic investment and cast techniques. While the bonding procedure of the appliance could be reinforced, CAD technology simplified the fabrication process by eliminating the soldering phase. This report describes CAD/CAM fabrication of the complex anteroposterior lingual bonded retraction appliance for intrusive retraction of the maxillary anterior dentition. Furthermore, the CAD/CAM method eliminates the extra step of determining the lever arm on the lateral cephalograms and subsequent design modifications on the study model. PMID:24899895

  15. Measurement of Channel Morphology in a Headwater Stream using Low-Altitude Photography and a 3D Model Software

    NASA Astrophysics Data System (ADS)

    Nidaira, K.; Hiraoka, M.; Gomi, T.; Uchiyama, Y.

    2015-12-01

    We developed a method for measuring detailed channel morphology using low-elevation photographic scanning. This study was conducted in a 36-m step-pool channel segment in a headwater stream of the Ooborazawa watershed, located 20 km south of Tokyo. The channels were covered by Boenninghausenia japonica and Oplismenus undulatifolius var. undulatifolius; therefore, topographic measurement at high altitude (up to 5 m) using a drone was not applicable. D50 and D90 of channel substrates were 4 cm and 21 cm, respectively. A plastic case equipped with two digital cameras (RICOH CX5) was mounted at the top of a 2.2 m glass fiber pole. Photos were taken every 5 seconds from 1.8 m above the ground surface. Eleven ground control points (GCPs) were installed and their coordinates measured. We developed a digital 3D topographic model using PhotoScan Pro version 1.0.0 and a 1 cm contour map using ArcGIS version 10.2. Furthermore, we measured the number, height, and length of steps to examine the accuracy of the data. The resolution of the obtained topographic model was 9 to 11 mm per pixel, and particles as small as 1 cm could be identified in the photos. Estimated step heights agreed with the step heights measured in the field. Maximum channel scour from October to December 2014 (maximum daily rain of 146.5 mm/day) occurred at pools, with changes of 13 cm, while 5 to 10 cm of sediment deposition occurred from May to June 2015 (maximum daily rain of 78.5 mm/day). Deposition of sediment was concentrated within the sequences of step structures. Our method allows us to understand detailed sediment movement and the resulting localized channel changes in steep channels.

  16. Surveying and mapping a cave using 3d laser scanner: the open challenge with free and open source software

    NASA Astrophysics Data System (ADS)

    Cosso, T.; Ferrando, I.; Orlando, A.

    2014-06-01

    The present work is part of a series of activities involving different skills, undertaken in order to explore and document in detail one of the most visited caves in the Liguria Region. In this context, in addition to speleologists, geologists and videographers, geomatics expertise has also been involved to carry out a laser scanner survey, in order to produce a three-dimensional model of the two most easily accessible rooms of the cave. The survey was carried out using a Z+F IMAGER® 5010 instrument, and the post-processing operations related to registration of the point clouds were made with Z+F LaserControl®. Subsequently, two different free and open source software packages were used: MeshLab, to merge the point clouds and obtain the final mesh, and CloudCompare, to filter the previous results and extract sections.

  17. A free software for pore-scale modelling: solving Stokes equation for velocity fields and permeability values in 3D pore geometries

    NASA Astrophysics Data System (ADS)

    Gerke, Kirill; Vasilyev, Roman; Khirevich, Siarhei; Karsanina, Marina; Collins, Daniel; Korost, Dmitry; Mallants, Dirk

    2015-04-01

    In this contribution we introduce a novel free software package which solves the Stokes equation to obtain velocity fields for low-Reynolds-number flows within externally generated 3D pore geometries. Provided with velocity fields, one can calculate permeability for known pressure-gradient boundary conditions via Darcy's equation. Finite-difference schemes of 2nd and 4th order of accuracy are used together with an artificial compressibility method to iteratively converge to a steady-state solution of the Stokes equation. This numerical approach is much faster and less computationally demanding than the majority of open-source or commercial software packages employing other algorithms (finite elements/volumes, lattice Boltzmann, etc.). The software consists of two parts: 1) a pre- and post-processing graphical interface, and 2) a solver. The latter is efficiently parallelized to use any number of available cores (the speedup on 16 threads was up to 10-12 depending on hardware). Due to parallelization and memory optimization our software can be used to obtain solutions for 300x300x300-voxel geometries on modern desktop PCs. The software was successfully verified by testing it against lattice Boltzmann simulations and analytical solutions. To illustrate the software's applicability to numerous problems in Earth Sciences, a number of case studies have been developed: 1) identifying the representative elementary volume for permeability determination within a sandstone sample, 2) derivation of permeability/hydraulic conductivity values for rock and soil samples and comparison with experimentally obtained values, 3) revealing the influence of the amount of fine-textured material such as clay on the filtration properties of sandy soil. This work was partially supported by RSF grant 14-17-00658 (pore-scale modelling) and RFBR grants 13-04-00409-a and 13-05-01176-a.
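The permeability step described above follows directly from Darcy's law once the solver has produced a steady velocity field. A minimal sketch (illustrative only; the function name, units and values are assumptions, not the package's actual API):

```python
# Hedged sketch: once a Stokes solver yields a steady velocity field, the
# permeability follows from Darcy's law  <u> = -(k / mu) * dP/dx,
# so  k = mu * <u> / |dP/dx|.  All names and numbers are illustrative.

def darcy_permeability(mean_velocity, viscosity, pressure_gradient):
    """Permeability k (m^2) from Darcy's law.

    mean_velocity      -- superficial (volume-averaged) velocity, m/s
    viscosity          -- dynamic viscosity mu, Pa*s
    pressure_gradient  -- magnitude of the applied pressure gradient, Pa/m
    """
    return mean_velocity * viscosity / pressure_gradient

# Example: water (mu = 1e-3 Pa*s) flowing at 1e-4 m/s under 1e4 Pa/m
k = darcy_permeability(1e-4, 1e-3, 1e4)
print(k)  # 1e-11 m^2, roughly 10 darcy
```

In the full workflow, `mean_velocity` would be the volume average of the solved velocity component along the applied pressure gradient.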

  18. Using semi-automated photogrammetry software to generate 3D surfaces from oblique and vertical photographs at Mount St. Helens, WA

    NASA Astrophysics Data System (ADS)

    Schilling, S.; Diefenbach, A. K.

    2012-12-01

    Photogrammetry has been used to generate contours and Digital Elevation Models (DEMs) to monitor change at Mount St. Helens, WA since the 1980 eruption. We continue to improve techniques to monitor topographic changes within the crater. During the 2004-2008 eruption, 26 DEMs were used to track volume and rates of growth of a lava dome and changes of Crater Glacier. These measurements constrained seismogenic extrusion models and were compared with geodetic deflation volume to constrain magma chamber behavior. We used photogrammetric software to collect irregularly spaced 3D points primarily by hand and, in reasonably flat areas, by automated algorithms, from commercial vertical aerial photographs. These models took days to months to complete and the areal extent of each surface was determined by visual inspection. Later in the eruption, we pioneered the use of different software to generate irregularly spaced 3D points manually from oblique images captured by a hand-held digital camera. In each case, the irregularly spaced points and intervening interpolated points formed regular arrays of cells or DEMs. Calculations using DEMs produced from the hand-held images duplicated volumetric and rate results gleaned from the vertical aerial photographs. This manual point capture technique from oblique hand-held photographs required only a few hours to generate a model over a focused area such as the lava dome, but would have taken perhaps days to capture data over the entire crater. Here, we present results from new photogrammetric software that uses robust image-matching algorithms to produce 3D surfaces automatically after inner, relative, and absolute orientations between overlapping photographs are completed. Measurements using scans of vertical aerial photographs taken August 10, 2005 produced dome volume estimates within two percent of those from a surface generated using the vertical aerial photograph manual method. 
The new August 10th orientations took less than 8
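The dome-volume comparisons described in this record reduce to DEM differencing: the volume change is the cell-by-cell elevation difference times the cell area. A minimal sketch under assumed names (not the authors' actual software):

```python
# Hedged sketch: volume change between two co-registered DEMs is the sum of
# per-cell elevation differences multiplied by the cell area. Illustrative
# pure-Python version; real DEMs would be raster arrays, not nested lists.

def dem_volume_change(dem_before, dem_after, cell_area):
    """DEMs are equally shaped 2D lists of elevations (m); returns m^3."""
    return sum(
        (after - before) * cell_area
        for row_b, row_a in zip(dem_before, dem_after)
        for before, after in zip(row_b, row_a)
    )

before = [[10.0, 10.0], [10.0, 10.0]]
after = [[12.0, 11.0], [10.5, 10.0]]   # dome growth raises some cells
print(dem_volume_change(before, after, cell_area=4.0))  # 14.0 m^3
```

Comparing volumes computed this way from two independently generated surfaces is how the two-percent agreement quoted above would be measured.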

  19. The physics of volume rendering

    NASA Astrophysics Data System (ADS)

    Peters, Thomas

    2014-11-01

    Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
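The connection the article draws can be made concrete with the emission-absorption model: integrating dI/ds = j - alpha*I along a ray is exactly what a direct volume renderer does per pixel. A minimal front-to-back sketch (illustrative; RADMC-3D's actual implementation differs):

```python
import math

# Hedged sketch of the emission-absorption transfer equation underlying
# direct volume rendering: integrate dI/ds = j - alpha*I along one ray,
# assuming j and alpha are constant within each sample segment.

def ray_march(emission, absorption, ds):
    """Front-to-back integration along one ray.

    emission   -- per-sample emissivity j
    absorption -- per-sample absorption coefficient alpha
    ds         -- path length per sample
    Returns the accumulated intensity reaching the observer.
    """
    intensity, transmittance = 0.0, 1.0
    for j, alpha in zip(emission, absorption):
        t = math.exp(-alpha * ds)            # transparency of this segment
        if alpha > 0:
            # exact source term for constant j, alpha over the segment
            intensity += transmittance * (j / alpha) * (1.0 - t)
        else:
            intensity += transmittance * j * ds   # optically thin limit
        transmittance *= t
    return intensity

# Two equal segments reproduce the analytic result (j/alpha)*(1 - e^{-alpha*L})
print(ray_march([1.0, 1.0], [0.5, 0.5], 1.0))  # ≈ 1.2642 = 2*(1 - e^-1)
```

Replacing the physical j and alpha with a color and opacity transfer function turns this same loop into standard alpha compositing.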

  20. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to a lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is physically located at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system for truthfully perceiving 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  1. Time-Critical Volume Rendering

    NASA Technical Reports Server (NTRS)

    Kaufman, Arie

    1998-01-01

    For the past twelve months, we have conducted and completed a joint research project with NASA Ames entitled "Time-Critical Volume Rendering." As expected, high-performance volume rendering algorithms have been developed by exploring new, faster rendering techniques, including object presence acceleration, parallel processing, and hierarchical level-of-detail representation. Using our new techniques, initial experiments have achieved real-time rendering rates of more than 10 frames per second on various 3D data sets at the highest resolution. A couple of joint papers and technical reports, as well as an interactive real-time demo, have been compiled as the result of this project.

  2. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. The dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of tumor motion during dose delivery has become greater due to very steep dose gradients, so intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image-intensity-based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform focused especially on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity-based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance-critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz), which is suitable for tracking tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies to patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.
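At the core of intensity-based 2D/3D registration is a similarity metric between a rendered projection of the 3D volume and the live 2D image; the optimizer adjusts the 3D pose to maximize it. Normalized cross-correlation is one common choice, sketched here in illustrative pure Python (not the platform's GPU implementation):

```python
import math

# Hedged sketch: normalized cross-correlation (NCC), a typical similarity
# metric for intensity-based 2D/3D registration. Function name and the use
# of flat intensity lists are illustrative assumptions.

def ncc(a, b):
    """Normalized cross-correlation of two equally sized intensity lists."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

# Linearly related images correlate perfectly, which makes NCC robust to
# global brightness and contrast differences between modalities.
print(ncc([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

In a registration loop, `a` would be the fixed X-ray image and `b` a digitally reconstructed radiograph rendered at the current pose estimate.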

  3. Gesture Interaction Browser-Based 3D Molecular Viewer.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third-party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instructing users in less IT-oriented environments, like medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is made with the Leap Motion controller, which allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics related problems in both biomedical research and education. PMID:27350455

  4. Quantum rendering

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Gomez, Richard B.; Uhlmann, Jeffrey K.

    2003-08-01

    In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics are extensively used for a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require the computer graphics systems to produce high quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum Computing's unique algorithmic features offer the possibility of speeding up some of the known rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray-tracing, radiosity, and scene management techniques. We also compare the theoretical performance between the classical and quantum versions of the algorithms.
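The Z-buffering technique named above is, classically, a per-pixel minimum-depth search; the quantum speedup the authors discuss would replace that search with Grover's algorithm. A minimal classical sketch for reference (names and the fragment format are illustrative assumptions):

```python
# Hedged sketch of classical Z-buffering: for each pixel, the fragment with
# the smallest depth wins. This per-pixel minimum search is the step a
# Grover-based approach would target. Purely illustrative.

def z_buffer(width, height, fragments, far=float("inf")):
    """fragments: iterable of (x, y, depth, color). Returns color and depth maps."""
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:          # keep only the nearest fragment seen so far
            depth[y][x] = z
            color[y][x] = c
    return color, depth

frags = [(0, 0, 5.0, "red"), (0, 0, 2.0, "blue"), (1, 0, 3.0, "green")]
color, depth = z_buffer(2, 1, frags)
print(color[0])  # ['blue', 'green'] -- the nearer blue fragment occludes red
```

Classically this scan is linear in the number of fragments per pixel; Grover's search finds a minimum among N candidates in O(sqrt(N)) oracle queries, which is the source of the speedup considered in the paper.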

  5. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce a variety of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  6. Implant Restoration of Edentulous Jaws with 3D Software Planning, Guided Surgery, Immediate Loading, and CAD-CAM Full Arch Frameworks

    PubMed Central

    De Riu, Giacomo; Pisano, Milena; Campus, Guglielmo; Tullio, Antonio

    2013-01-01

    Purpose. The aim of this study was to analyze the clinical and radiographic outcomes of 23 edentulous jaws treated with 3D software planning, guided surgery, and immediate loading and restored with CAD-CAM full arch frameworks. Materials and Methods. This work was designed as a prospective case series clinical study. Twenty patients have been consecutively rehabilitated with an immediately loaded implant supported fixed full prosthesis. A total of 120 fixtures supporting 23 bridges were placed. 117 out of 120 implants were immediately loaded. Outcome measures were implants survival, radiographic marginal bone levels and remodeling, soft tissue parameters, and complications. Results. 114 of 117 implants reached a 30 months follow-up, and no patients dropped out from the study. The cumulative survival rate was 97.7%; after 30 months, mean marginal bone level was 1.25 ± 0.31 mm, mean marginal bone remodeling value was 1.08 ± 0.34, mean PPD value was 2.84 ± 0.55 mm, and mean BOP value was 4% ± 2.8%. Only minor prosthetic complications were recorded. Conclusion. Within the limitations of this study, it can be concluded that computer-guided surgery and immediate loading seem to represent a viable option for the immediate rehabilitations of completely edentulous jaws with fixed implant supported restorations. This trial is registered with Clinicaltrials.gov NCT01866696. PMID:23983690

  7. Two Eyes, 3D: Stereoscopic Design Principles

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Subbarao, M.; Wyatt, R.

    2013-01-01

    Two Eyes, 3D is an NSF-funded research project about how people perceive highly spatial objects when shown with 2D or stereoscopic ("3D") representations. As part of the project, we produced a short film about SN 2011fe. The high-definition film has been rendered in both 2D and stereoscopic formats. It was developed according to a set of stereoscopic design principles we derived from the literature and past experience producing and studying stereoscopic films. Study participants take a pre- and post-test that involves a spatial cognition assessment and scientific knowledge questions about Type Ia supernovae. For the evaluation, participants use iPads in order to record spatial manipulation of the device and look for elements of embodied cognition. We will present early results and also describe the stereoscopic design principles and the rationale behind them. All of our content and software is available under open source licenses. More information is at www.twoeyes3d.org.

  8. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted at applications which require very fast polygon rendering of extremely large sets of polygons, such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  9. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  10. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in stealth mode or as a player alongside battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high-quality and fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows 95 and Windows NT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor, and RenderWare is paired with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in real-time collaborative simulation with other platforms is thus achieved.

  11. Interactive 3d Landscapes on Line

    NASA Astrophysics Data System (ADS)

    Fanini, B.; Calori, L.; Ferdani, D.; Pescarin, S.

    2011-09-01

    The paper describes challenges identified while developing browser-embedded 3D landscape rendering applications, our current approach and workflow, and how recent developments in browser technologies could affect them. All the data, even when processed by optimization and decimation tools, result in very large databases that require paging, streaming and level-of-detail techniques to be implemented to allow remote, web-based, real-time use. Our approach has been to select an open source scene-graph-based visual simulation library with sufficient performance and flexibility and adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole Montegrotto town has been generated procedurally by CityEngine. We used this procedural approach, based on algorithms and procedures, because it is particularly well suited to creating extensive and credible urban reconstructions. To create the archaeological sites we used optimized meshes acquired with laser scanning and photogrammetry techniques, whereas to realize the 3D reconstructions of the main historical buildings we adopted computer-graphics software like Blender and 3ds Max. At the final stage, semi-automatic tools were developed and used to prepare and cluster 3D models and scene-graph routes for web publishing. Vegetation generators have also been used with the goal of populating the virtual scene to enhance the user's perceived realism during navigation. After the description of the 3D modelling and optimization techniques, the paper discusses its results and expectations.
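The level-of-detail technique such web pipelines rely on can be sketched as picking the coarsest mesh whose projected geometric error stays below a screen-space threshold. A minimal illustrative version (all names, scale factors and thresholds are assumptions, not the project's actual code):

```python
# Hedged sketch of screen-space-error LOD selection: project each level's
# geometric error to pixels and pick the coarsest acceptable level.
# fov_scale folds together focal length and viewport size; illustrative only.

def select_lod(geometric_errors, distance, fov_scale=1000.0, max_pixels=2.0):
    """geometric_errors: per-level error in metres, coarsest first.
    Returns the index of the coarsest level whose projected error is small enough."""
    for level, err in enumerate(geometric_errors):
        screen_error = err * fov_scale / max(distance, 1e-6)
        if screen_error <= max_pixels:
            return level
    return len(geometric_errors) - 1   # fall back to the finest level

# Coarse models are acceptable far away; close up, the finest level is needed.
print(select_lod([8.0, 2.0, 0.5], distance=5000))  # 0
print(select_lod([8.0, 2.0, 0.5], distance=100))   # 2
```

Paging and streaming then amount to fetching only the mesh tiles whose selected level is not yet resident in the browser.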

  12. New portable FELIX 3D display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz

    1998-04-01

    An improved generation of our 'FELIX 3D Display' is presented. The system is compact, light, modular and easy to transport. The volumetric images it creates consist of many voxels generated in a half-sphere display volume, so that a spatial object is displayed occupying a physical space with height, width and depth. The new FELIX generation uses a screen rotating at 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism, making it possible to use the screen appropriate to the display's specific purpose. An acousto-optic deflection unit with an integrated small diode-pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are under investigation. The first is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook; its software provides a graphical user interface with several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high-speed hardware interface suitable for state-of-the-art fast, high-resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include imaging in the fields of air traffic control, medical imaging, computer-aided design and science, as well as entertainment.
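
    The voxel addressing for a rotating-screen volumetric display of this kind can be sketched as a coordinate mapping: each Cartesian voxel is drawn at the rotation angle where the spinning plane contains it, at a signed in-plane radius. This is an illustrative reconstruction, not the published FELIX drive scheme:

```python
import math

def voxel_to_screen(x, y, z):
    """Map a Cartesian voxel (x, y, z) to a rotating-screen address:
    the screen angle theta (about the z axis) at which the spinning
    plane contains the point, plus signed in-plane coordinates (r, z).
    Illustrative only; not the actual FELIX drive scheme."""
    phi = math.atan2(y, x) % (2 * math.pi)   # azimuth of the voxel
    theta = phi % math.pi                    # the plane covers phi and phi + pi
    r = math.hypot(x, y)
    if phi >= math.pi:                       # voxel lies on the back half-plane
        r = -r
    return theta, r, z

# A voxel on the +x axis is drawn at screen angle 0 with radius +1;
# its mirror on the -x axis is drawn at the same angle with radius -1.
front = voxel_to_screen(1.0, 0.0, 0.0)
back = voxel_to_screen(-1.0, 0.0, 0.0)
```

    At 20 revolutions per second each voxel is refreshed twice per turn (once per half-plane pass), consistent with the quoted 20 Hz refresh rate.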

  13. Rendering Three-Dimensional Solar Coronal Structures

    NASA Technical Reports Server (NTRS)

    Gary, G. Allen

    1997-01-01

    An X-ray or EUV image of the corona or chromosphere is a 2D representation of an extended 3D complex for which a general inversion process is impossible. A specific model must be incorporated in order to understand the full 3D structure. We approach this problem by modeling a set of optically thin 3D plasma flux tubes, which we render as synthetic images. The resulting images allow the interpretation of the X-ray/EUV observations to obtain information on (1) the 3D structure of X-ray images, i.e., the geometric structure of the flux tubes, and (2) the internal structure using specific plasma characteristics, i.e., the physical structure of the flux tubes. The data-analysis technique uses magnetograms to characterize photospheric magnetic fields and extrapolation techniques to form the field lines. Using a new set of software tools, we have generated 3D flux-tube structures around these field lines and integrated the plasma emission along the line of sight to obtain a rendered image. A set of individual flux-tube images is selected by a non-negative least-squares technique to provide a match with an observed X-ray image. The scheme minimizes the squares of the differences between the synthesized image and the observed image with a non-negative constraint on the coefficients of the brightness of the individual flux-tube loops. The derived images are used to determine the specific photospheric foot points and physical data, i.e., scaling laws for densities and loop lengths. The development has led to computer-efficient integration and display software that is suitable for comparison with observations (e.g., Yohkoh SXT data, NIXT, or EIT). This analysis is important in determining directly the magnetic field configuration, which provides the structure of coronal loops, and indirectly the electric currents or waves, which provide the energy for the heating of the plasma. We have used very simple assumptions (i.e., potential magnetic fields and isothermal
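
    The non-negative least-squares selection step can be illustrated in miniature: each column of a matrix is a flattened single-flux-tube image, and the fit picks non-negative brightness coefficients matching the observed image. The sketch below uses a simple projected-gradient solver (the abstract does not specify the solver) on synthetic data:

```python
import numpy as np

def nnls_fit(A, b, iters=3000, tol=1e-12):
    """Minimise ||A x - b||^2 subject to x >= 0 by projected gradient
    descent -- a simple stand-in for the paper's (unspecified) NNLS solver."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x_new = np.maximum(0.0, x - A.T @ (A @ x - b) / L)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Synthetic check: three flattened "single flux tube" images as columns;
# the observed image is a non-negative combination of two of them.
rng = np.random.default_rng(0)
tube_images = rng.random((100, 3))
observed = tube_images @ np.array([2.0, 0.0, 1.5])
coeffs = nnls_fit(tube_images, observed)
```

    The recovered coefficients give the brightness of each candidate loop; tubes that do not contribute to the observed image receive a coefficient of zero rather than an unphysical negative brightness.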

  14. Physically Based Rendering in the Nightshade NG Visualization Platform

    NASA Astrophysics Data System (ADS)

    Berglund, Karrie; Larey-Williams, Trystan; Spearman, Rob; Bogard, Arthur

    2015-01-01

    This poster describes our work on creating a physically based rendering model in Nightshade NG planetarium simulation and visualization software (project website: NightshadeSoftware.org). We discuss techniques used for rendering realistic scenes in the universe and dealing with astronomical distances in real time on consumer hardware. We also discuss some of the challenges of rewriting the software from scratch, a project which began in 2011. Nightshade NG can be a powerful tool for sharing data and visualizations. The desktop version of the software is free for anyone to download, use, and modify; it runs on Windows and Linux (and eventually Mac). If you are looking to disseminate your data or models, please stop by to discuss how we can work together. Nightshade software is used in literally hundreds of digital planetarium systems worldwide. Countless teachers and astronomy education groups run the software on flat screens. This wide use makes Nightshade an effective tool for dissemination to educators and the public. Nightshade NG is an especially powerful visualization tool when projected on a dome. We invite everyone to enter our inflatable dome in the exhibit hall to see this software in a 3D environment.

  15. Imaging of Temporomandibular Joint: Approach by Direct Volume Rendering

    PubMed Central

    Caradonna, Carola; Bruschetta, Daniele; Vaccarino, Gianluigi; Milardi, Demetrio

    2014-01-01

    Background: The purpose of this study was to conduct a morphological analysis of the temporomandibular joint, a highly specialized synovial joint that permits movement and function of the mandible. Materials and Methods: We studied temporomandibular joint anatomy directly on the living subject, from 3D images obtained by Computed Tomography and Nuclear Magnetic Resonance acquisition and subsequent re-engineering by 3D Surface Rendering and Volume Rendering. Data were analysed with the goal of isolating, identifying and distinguishing the anatomical structures of the joint, and of extracting as much information as possible using post-processing software. Results: It was possible to reproduce the anatomy of the skeletal structures and, through the Magnetic Resonance Imaging acquisitions, to visualize the vascular, muscular, ligamentous and tendinous components of the articular complex, together with the capsule and the fibrous cartilaginous disc. Surface Rendering and Volume Rendering yielded not only three-dimensional images comparable in colour and resolution to conventional anatomical preparations, but also a considerable number of finer anatomical details, obtained by zooming, rotating and cutting the images and by grading their colour, transparency and opacity. Conclusion: These results encourage further studies in other anatomical districts. PMID:25664280

  16. Linked-View Parallel Coordinate Plot Renderer

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.
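
    A binned parallel-coordinate renderer of the kind described reduces each pair of adjacent axes to a 2D histogram of line crossings, which the shader then maps to line opacity or HDR intensity. A minimal sketch of the binning stage, with assumed array shapes:

```python
import numpy as np

def parallel_coord_bins(data, n_bins=8):
    """Binning stage of a binned parallel-coordinate renderer: min-max
    normalise each axis, then for every pair of adjacent axes count how
    many polylines connect each (left-bin, right-bin) cell. A renderer
    would map these counts to line opacity or HDR intensity."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    norm = (data - lo) / np.where(hi > lo, hi - lo, 1.0)
    idx = np.minimum((norm * n_bins).astype(int), n_bins - 1)
    counts = np.zeros((data.shape[1] - 1, n_bins, n_bins), dtype=int)
    for a in range(data.shape[1] - 1):
        # unbuffered accumulation so repeated (row, col) pairs all count
        np.add.at(counts[a], (idx[:, a], idx[:, a + 1]), 1)
    return counts

rng = np.random.default_rng(1)
demo = rng.random((500, 4))          # 500 samples across 4 parallel axes
density = parallel_coord_bins(demo)
```

    Binning keeps rendering cost proportional to the number of occupied cells rather than the number of samples, which is what makes the display interactive on large multidimensional data.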

  17. Exploratory nuclear microprobe data visualisation using 3- and 4-dimensional biological volume rendering tools

    NASA Astrophysics Data System (ADS)

    Whitlow, Harry J.; Ren, Minqin; van Kan, Jeroen A.; Watt, Frank; White, Dan

    2007-07-01

    The emergence of Confocal Microscopy (CM) and Atomic Force Microscopy (AFM) as everyday tools in cellular-level biology has stimulated the development of 3D data visualisation software. Conventional 2-dimensional images of cell (optical) sections obtained with transmission electron or optical microscopes, and more sophisticated multidimensional imaging methods, require processing software capable of 3D rendering and of mathematically transforming data in 3, 4 or more dimensions. The richness of data obtained from the different nuclear microscopy imaging techniques and their often parallel information channels (X-ray, secondary electron, Scanning Transmission Ion Microscopy) is often not apparent because subtleties and interrelations in the data cannot be rendered in a humanly interpretable way. In this exploratory study we have applied the BioImageXD software, originally developed for rendering multidimensional CM data, to several kinds of nuclear microscopy data. Cells-on-silicon STIM data from a human breast cancer cell line and elemental maps from lesions on rabbit aorta have been visualised. Mathematical filtering and averaging combined with hardware-accelerated 3D rendering enabled dramatically clear visualisation of intercellular regions comprising extracellular matrix proteins, as well as subcellular structures, that were otherwise difficult to visualise. For elemental mapping, the use of filtered correlation surfaces and colour channels clearly revealed interrelations in the data structures that are not easily discernible in the PIXE elemental maps.
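
    One plausible reading of the "filtered correlation surfaces" used for elemental mapping is a sliding-window Pearson correlation between two elemental maps; the sketch below assumes that interpretation rather than reproducing the authors' exact filter:

```python
import numpy as np

def correlation_surface(map_a, map_b, win=5):
    """Sliding-window Pearson correlation between two elemental maps
    (an assumed reading of 'filtered correlation surfaces'). Border
    pixels without a full window are left at zero."""
    h, w = map_a.shape
    half = win // 2
    out = np.zeros((h, w))
    for i in range(half, h - half):
        for j in range(half, w - half):
            a = map_a[i - half:i + half + 1, j - half:j + half + 1].ravel()
            b = map_b[i - half:i + half + 1, j - half:j + half + 1].ravel()
            if a.std() > 0 and b.std() > 0:
                out[i, j] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(2)
element_x = rng.random((20, 20))
element_y = 2.0 * element_x + 1.0    # perfectly co-localised second element
corr = correlation_surface(element_x, element_y)
```

    Regions where two elements co-locate show correlation near +1 and can be assigned a distinct colour channel, making interrelations visible that raw per-element PIXE maps hide.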

  18. 3D-Printed Microfluidics.

    PubMed

    Au, Anthony K; Huynh, Wilson; Horowitz, Lisa F; Folch, Albert

    2016-03-14

    The advent of soft lithography allowed for an unprecedented expansion in the field of microfluidics. However, the vast majority of PDMS microfluidic devices are still made with extensive manual labor, are tethered to bulky control systems, and have cumbersome user interfaces, which all render commercialization difficult. On the other hand, 3D printing has begun to embrace the range of sizes and materials that appeal to the developers of microfluidic devices. Prior to fabrication, a design is digitally built as a detailed 3D CAD file. The design can be assembled in modules by remotely collaborating teams, and its mechanical and fluidic behavior can be simulated using finite-element modeling. As structures are created by adding materials without the need for etching or dissolution, processing is environmentally friendly and economically efficient. We predict that in the next few years, 3D printing will replace most PDMS and plastic molding techniques in academia. PMID:26854878

  19. Direct volume rendering methods for cell structures.

    PubMed

    Martišek, Dalibor; Martišek, Karel

    2012-01-01

    The study of the complicated architecture of cell space structures is an important problem in biology and medical research. Optical cuts of cells produced by confocal microscopes enable two-dimensional (2D) and three-dimensional (3D) reconstructions of observed cells. This paper discusses new possibilities for direct volume rendering of these data. We often encounter 16-bit or deeper images in confocal microscopy of cells, and most of the information contained in these images is imperceptible to human vision; it is therefore necessary to use mathematical algorithms to visualize such images. Current software tools such as OpenGL or DirectX run quickly on graphics workstations with special graphics cards, but run very unsatisfactorily on PCs without these cards, and their outputs are usually poor for real data. These tools are black boxes for the common user, making it impossible to correct and improve them. With the proposed method, more parameters of the environment can be set, making it possible to apply 3D filters that balance output image sharpness against noise. The quality of the output is incomparably better than that of the earlier described methods and is worth the increased computing time. We would like to offer mathematical methods of 3D scalar data visualization, describing new algorithms that run very well on standard PCs. PMID:22511504
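
    As an illustration of the windowing-plus-3D-filtering pipeline the paper argues for, the sketch below windows a 16-bit volume to display range, applies a small 3D mean filter as a generic stand-in for the authors' noise filters, and forms a maximum-intensity projection, all in software on a standard PC:

```python
import numpy as np

def render_mip(volume, level, width, smooth=True):
    """Window a 16-bit volume to [0, 1], optionally apply a 3x3x3 mean
    filter (a simple stand-in for the paper's 3D noise filters), then
    take a maximum-intensity projection along z as an 8-bit image."""
    lo = level - width / 2.0
    v = np.clip((volume.astype(float) - lo) / width, 0.0, 1.0)
    if smooth:
        padded = np.pad(v, 1, mode="edge")
        acc = np.zeros_like(v)
        nz, ny, nx = v.shape
        for dz in range(3):
            for dy in range(3):
                for dx in range(3):
                    acc += padded[dz:dz + nz, dy:dy + ny, dx:dx + nx]
        v = acc / 27.0                  # mean over the 27-voxel neighbourhood
    return (v.max(axis=0) * 255).astype(np.uint8)

vol = np.zeros((8, 16, 16), dtype=np.uint16)
vol[4, 8, 8] = 40000                    # one bright voxel in a dark volume
image = render_mip(vol, level=20000, width=40000)
```

    The level/width window plays the role of mapping the imperceptible 16-bit range into the displayable one, and the filter strength is exactly the sharpness-versus-noise parameter the abstract says a user should be able to control.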

  20. dc3dm: Software to efficiently form and apply a 3D DDM operator for a nonuniformly discretized rectangular planar fault

    NASA Astrophysics Data System (ADS)

    Bradley, A. M.

    2013-12-01

    My poster will describe dc3dm, a free open source software (FOSS) package that efficiently forms and applies the linear operator relating slip and traction components on a nonuniformly discretized rectangular planar fault in a homogeneous elastic (HE) half space. This linear operator implements what is called the displacement discontinuity method (DDM). The key properties of dc3dm are: 1. The mesh can be nonuniform. 2. Work and memory scale roughly linearly in the number of elements (rather than quadratically). 3. The order of accuracy of my method on a nonuniform mesh is the same as that of the standard method on a uniform mesh. Property 2 is achieved using my FOSS package hmmvp [AGU 2012]. A nonuniform mesh (property 1) is natural for some problems. For example, in a rate-state friction simulation, nucleation length, and so required element size, scales reciprocally with effective normal stress. Property 3 assures that if a nonuniform mesh is more efficient than a uniform mesh (in the sense of accuracy per element) at one level of mesh refinement, it will remain so at all further mesh refinements. I use the routine DC3D of Y. Okada, which calculates the stress tensor at a receiver resulting from a rectangular uniform dislocation source in an HE half space. On a uniform mesh, straightforward application of this Green's function (GF) yields a DDM I refer to as DDMu. On a nonuniform mesh, this same procedure leads to artifacts that degrade the order of accuracy of the DDM. I have developed a method I call IGA that implements the DDM using this GF for a nonuniformly discretized mesh having certain properties. Importantly, IGA's order of accuracy on a nonuniform mesh is the same as DDMu's on a uniform one. Boundary conditions can be periodic in the surface-parallel direction (in both directions if the GF is for a whole space), velocity on any side, and free surface. The mesh must have the following main property: each uniquely sized element must tile each element
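
    The DDM operator's role, mapping element slips to tractions through a Green's function evaluated between element pairs, can be shown with a toy dense matrix. Note that dc3dm itself uses Okada's DC3D kernel with hmmvp's hierarchical-matrix compression to achieve near-linear scaling, not the O(n^2) assembly sketched here with a made-up kernel:

```python
import numpy as np

def toy_ddm_matrix(centers, kernel):
    """Dense stand-in for the DDM operator: traction_i = sum_j K[i, j] * slip_j,
    with K[i, j] a Green's function evaluated between element centres.
    dc3dm pairs Okada's DC3D kernel with hmmvp's H-matrix compression so
    that work and memory scale roughly linearly; this toy version is O(n^2)."""
    n = len(centers)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(centers[i], centers[j])
    return K

# Toy 1-D kernel that decays with element separation (NOT the elastic GF).
kernel = lambda a, b: 1.0 / (1.0 + abs(a - b))
centers = np.linspace(0.0, 1.0, 5)   # element centres along a fault segment
K = toy_ddm_matrix(centers, kernel)
traction = K @ np.ones(5)            # traction induced by uniform unit slip
```

    In a rate-state friction simulation this matrix-vector product is applied at every time step, which is why replacing the dense matrix with a compressed operator matters.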

  1. Clinical Experience With A Portable 3-D Reconstruction Program

    NASA Astrophysics Data System (ADS)

    Holshouser, Barbara A.; Christiansen, Edwin L.; Thompson, Joseph R.; Reynolds, R. Anthony; Goldwasser, Samuel M.

    1988-06-01

    Clinical experience with a computer program for reconstructing and visualizing three-dimensional (3-D) structures is reported. Applications to the study of soft-tissue and skeletal structures, such as the temporomandibular joint and craniofacial anatomy, using computed tomography (CT) data are described. Several features specific to the computer algorithm are demonstrated and evaluated. These include: (1) manipulation of density windows to selectively visualize bone or soft-tissue structures; (2) the efficacy of gradient-shading algorithms in revealing fine surface detail; and (3) the rapid generation of cut-away views revealing details of internal structures. Also demonstrated is the importance of high-resolution data as input to the 3-D program. The implementation of the program (VoxelView-32) described here is on a MASSCOMP computer running UNIX. Data were collected with General Electric or Siemens CT scanners and transferred to the MASSCOMP, via magnetic tape or Ethernet, for off-line 3-D reconstruction. An interactive graphics facility on the MASSCOMP allows viewing of 2-D slices, subregioning, and selection of lower and upper density thresholds for segmentation. The software then enters a pre-processing phase during which a volume representation of the segmented object (soft tissue or bone) is automatically created. This is followed by a rendering phase during which multiple views of the segmented object are automatically generated. The pre-processing phase typically takes 4 to 8 minutes (although very large datasets may require as much as 30 minutes) and the rendering phase typically takes 1 to 2 minutes for each 3-D view. Volume representation and rendering techniques are used at all stages of the processing, and gradient shading is used for enhanced surface detail.
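
    The density-threshold segmentation and gradient shading described in features (1) and (2) can be sketched as follows, using central-difference gradients as surface normals for a Lambertian term. This is a generic reconstruction of the technique, not the VoxelView-32 algorithm:

```python
import numpy as np

def gradient_shade(volume, lower, upper, light=(1.0, 0.0, 0.0)):
    """Segment a volume with a [lower, upper] density window, then shade
    each in-window voxel by a Lambertian term using the local density
    gradient (central differences) as the surface normal. Axis order of
    `light` matches np.gradient's per-axis output: (z, y, x)."""
    mask = (volume >= lower) & (volume <= upper)
    gz, gy, gx = np.gradient(volume.astype(float))
    g = np.stack([gz, gy, gx], axis=-1)
    norms = np.linalg.norm(g, axis=-1, keepdims=True)
    normal = g / np.maximum(norms, 1e-12)
    lvec = np.asarray(light, dtype=float)
    lvec /= np.linalg.norm(lvec)
    shade = np.clip(normal @ lvec, 0.0, 1.0)   # cosine of normal-to-light angle
    return np.where(mask, shade, 0.0)

# Density ramp along z: every in-window voxel faces the light head-on.
vol = np.tile(np.arange(10.0).reshape(10, 1, 1), (1, 5, 5))
shaded = gradient_shade(vol, lower=3, upper=7)
```

    Because the shading comes from the density gradient rather than from a binary surface mask, fine surface detail survives the segmentation step, which is the point of gradient shading over simple depth shading.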

  2. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies, i.e. interactive 3D games, become attractive for movie theater operators. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered into a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and to 3D movie theater gaming.

  3. 3D volume visualization in remote radiation treatment planning

    NASA Astrophysics Data System (ADS)

    Yun, David Y.; Garcia, Hong-Mei C.; Mun, Seong K.; Rogers, James E.; Tohme, Walid G.; Carlson, Wayne E.; May, Stephen; Yagel, Roni

    1996-03-01

    This paper reports a novel application of 3D visualization in an ARPA-funded remote radiation treatment planning (RTP) experiment, utilizing supercomputer 3D volumetric modeling power and NASA ACTS (Advanced Communications Technology Satellite) communication bandwidths in the Ka-band range. The objective of radiation treatment is to deliver a tumoricidal dose of radiation to a tumor volume while minimizing doses to surrounding normal tissues. High-performance graphics computers are required to allow physicians to view a 3D anatomy, specify proposed radiation beams, and evaluate the dose distribution around the tumor. Supercomputing power is needed to compute and even optimize the dose distribution according to pre-specified requirements. High-speed communications offer possibilities for sharing scarce and expensive computing resources (e.g., hardware, software, personnel, etc.) as well as medical expertise for 3D treatment planning among hospitals. This paper provides initial technical insights into the feasibility of such resource sharing. The overall deployment of the RTP experiment, the visualization procedures, and parallel volume rendering in support of remote interactive 3D volume visualization are described.

  4. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    advanced features which aid visualization efforts. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are even offered: creation of simple animation sequences without the need for other software; and, creation of files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and can record images to digital disk, video tape, or 16-mm film. The version 3.6b+ SGI implementations of PLOT3D (ARC-12783) and PLOT3D/TURB3D (ARC-12782) were developed for use on Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations. These programs are each distributed on one .25 inch magnetic tape cartridge in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777,ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. 
Apollo and GMR3D are

  5. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    advanced features which aid visualization efforts. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are even offered: creation of simple animation sequences without the need for other software; and, creation of files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and can record images to digital disk, video tape, or 16-mm film. The version 3.6b+ SGI implementations of PLOT3D (ARC-12783) and PLOT3D/TURB3D (ARC-12782) were developed for use on Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations. These programs are each distributed on one .25 inch magnetic tape cartridge in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777,ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. 
Apollo and GMR3D are

  6. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  7. NON-NRC FUNDED RELAP5-3D VERSION 4.x.x SOFTWARE REACTOR EXCURSION AND LEAK ANALYSIS PACKAGE - THREE DIMENSIONAL

    2012-03-26

    The RELAP5-3D Version 3.x code has been developed for best-estimate transient simulation of nuclear reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems including pressurized water reactors, boiling water reactors, Soviet-designed reactors, heavy water reactors, gas-cooled reactors, liquid metal and molten salt cooled reactors, and even fusion reactors. Numerical models include multi-dimensional hydrodynamics, 1- and 2-D heat transfer in metal walls, 0-, 1-, 2-, and 3-D neutron kinetics, trips, and control systems. Secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems.

  8. NON-NRC FUNDED RELAP5-3D VERSION 4.x.x SOFTWARE REACTOR EXCURSION AND LEAK ANALYSIS PACKAGE - THREE DIMENSIONAL

    SciTech Connect

    2012-03-26

    The RELAP5-3D Version 3.x code has been developed for best-estimate transient simulation of nuclear reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems including pressurized water reactors, boiling water reactors, Soviet-designed reactors, heavy water reactors, gas-cooled reactors, liquid metal and molten salt cooled reactors, and even fusion reactors. Numerical models include multi-dimensional hydrodynamics, 1- and 2-D heat transfer in metal walls, 0-, 1-, 2-, and 3-D neutron kinetics, trips, and control systems. Secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems.

  9. 3D Surgical Simulation

    PubMed Central

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  10. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  11. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  12. Development of 3D mobile receiver for stereoscopic video and data service in T-DMB

    NASA Astrophysics Data System (ADS)

    Lee, Gwangsoon; Lee, Hyun; Yun, Kugjin; Hur, Namho; Lee, Soo In

    2011-02-01

In this paper, we present the development of a 3D T-DMB (three-dimensional digital multimedia broadcasting) receiver that provides 3D video and data services. First, for the 3D video service, the receiver is capable of decoding and playing 3D AV content that is encoded by the simulcast encoding method and transmitted via the T-DMB network. Second, the receiver can render stereoscopic multimedia objects delivered using the MPEG-4 BIFS technology also employed in T-DMB. Specifically, this paper introduces the hardware and software architecture of the 3D T-DMB receiver and its implementation. Because the receiver generates stereoscopic views on a glasses-free 3D mobile display, we also propose parameters for designing the 3D display and evaluate the viewing angle and distance through both computer simulation and actual measurement. Finally, the availability of the 3D video and data service is verified using an experimental system comprising the implemented receiver and a variety of service examples.

  13. Euro3D Science Conference

    NASA Astrophysics Data System (ADS)

    Walsh, J. R.

    2004-02-01

    completed 3D instruments - CIRPASS, GMOS, PMAS and SPIFFI. Work on 3D software, being developed as part of the Euro3D RTN, was also described and demonstrated. This proceedings volume, consisting of carefully refereed and edited manuscripts, represents the bulk of the talks at the conference and amply demonstrates that 3D spectroscopy is a lively and burgeoning field of optical observation.

  14. Vector quantization of 3-D point clouds

    NASA Astrophysics Data System (ADS)

    Sim, Jae-Young; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

A geometry compression algorithm for 3-D QSplat data using vector quantization (VQ) is proposed in this work. The positions of child spheres are transformed to the local coordinate system, which is determined by the parent-child relationship. The coordinate transform makes child positions more compactly distributed in 3-D space, facilitating effective quantization. Moreover, we develop a constrained encoding method for sphere radii, which guarantees hole-free surface rendering at the decoder side. Simulation results show that the proposed algorithm provides a faithful rendering quality even at low bitrates.
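The local-coordinate step can be sketched as follows; the frame-construction convention and both function names are illustrative assumptions for this listing, not the paper's actual scheme:

```python
import numpy as np

def local_frame(parent_normal):
    """Build an orthonormal frame around the parent sphere's normal
    (a hypothetical convention, for illustration only)."""
    n = parent_normal / np.linalg.norm(parent_normal)
    # Pick a helper axis that is not parallel to n.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, a)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    return np.stack([u, v, n])  # rows are the local axes

def to_local(child_centers, parent_center, parent_normal):
    """Express child sphere positions in the parent's local frame; the
    offsets cluster tightly there, which is what makes VQ effective."""
    R = local_frame(parent_normal)
    return (child_centers - parent_center) @ R.T
```

After such a transform, a codebook trained on the compact offset distribution can quantize each child position with few bits.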

  15. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is the misregistration that can occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.
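The slice-by-slice volume update that drives such live rendering can be sketched as below; the `LiveVolume` class and its `version` counter are hypothetical names, not the authors' implementation:

```python
import numpy as np

class LiveVolume:
    """Hold a 3D volume and refresh one slice at a time as 2D acquisitions
    arrive, mimicking an incremental multi-slice update scheme (sketch)."""
    def __init__(self, n_slices, height, width):
        self.data = np.zeros((n_slices, height, width), dtype=np.float32)
        self.version = 0  # bumped so a renderer knows it must redraw

    def update_slice(self, index, image):
        assert image.shape == self.data.shape[1:]
        self.data[index] = image
        self.version += 1

vol = LiveVolume(8, 64, 64)
vol.update_slice(3, np.ones((64, 64), dtype=np.float32))
```

A renderer polling `vol.version` would re-composite the volume only when a new slice has actually landed.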

  16. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary widely across platforms. There can also be diversity in mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other telemetered states visually, such as stresses or strains measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle

  17. Direct Volume Rendering of Curvilinear Volumes

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Wilhelms, J.; Challinger, J.; Alper, N.; Ramamoorthy, S.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Direct volume rendering can visualize sampled 3D scalar data as a continuous medium, or extract features. However, it is generally slow. Furthermore, most algorithms for direct volume rendering have assumed rectilinear gridded data. This paper discusses methods for using direct volume rendering when the original volume is curvilinear, i.e. is divided into six-sided cells which are not necessarily equilateral hexahedra. One approach is to ray-cast such volumes directly. An alternative approach is to interpolate the sample volumes to a rectilinear grid, and use this regular volume for rendering. Advantages and disadvantages of the two approaches in terms of speed and image quality are explored.
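The second approach, resampling the curvilinear samples onto a regular grid before rendering, can be illustrated with a simple nearest-neighbour stand-in (the actual interpolation would be higher order; the function name and signature are assumptions):

```python
import numpy as np

def resample_to_rectilinear(points, values, shape, bounds):
    """Nearest-neighbour resampling of scattered curvilinear samples onto a
    regular grid -- a crude stand-in for proper interpolation (sketch)."""
    lo, hi = bounds
    grid = np.zeros(shape, dtype=float)
    axes = [np.linspace(lo[d], hi[d], shape[d]) for d in range(3)]
    for idx in np.ndindex(*shape):
        p = np.array([axes[d][idx[d]] for d in range(3)])
        # take the value of the closest curvilinear sample point
        nearest = np.argmin(((points - p) ** 2).sum(axis=1))
        grid[idx] = values[nearest]
    return grid
```

Once the data sit on a rectilinear grid, any standard regular-grid volume renderer applies, at the cost of the resampling error the paper discusses.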

  18. Arena3D: visualization of biological networks in 3D

    PubMed Central

    Pavlopoulos, Georgios A; O'Donoghue, Seán I; Satagopam, Venkata P; Soldatos, Theodoros G; Pafilis, Evangelos; Schneider, Reinhard

    2008-01-01

Background Complexity is a key problem when visualizing biological networks; as the number of entities increases, most graphical views become incomprehensible. Our goal is to enable many thousands of entities to be visualized meaningfully and with high performance. Results We present a new visualization tool, Arena3D, which introduces a new concept of staggered layers in 3D space. Related data – such as proteins, chemicals, or pathways – can be grouped onto separate layers and arranged via layout algorithms, such as Fruchterman-Reingold, distance geometry, and a novel hierarchical layout. Data on a layer can be clustered via k-means, affinity propagation, Markov clustering, neighbor joining, tree clustering, or UPGMA ('unweighted pair-group method with arithmetic mean'). A simple input format defines the name and URL for each node, and defines connections or similarity scores between pairs of nodes. The use of Arena3D is illustrated with datasets related to Huntington's disease. Conclusion Arena3D is a user-friendly visualization tool that is able to visualize biological or any other network in 3D space. It is free for academic use and runs on any platform. It can be downloaded or launched directly from . The Java3D library and Java 1.5 need to be pre-installed for the software to run. PMID:19040715
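Of the per-layer clustering options listed, k-means is the simplest; a minimal sketch follows (Arena3D itself is Java-based, so this Python version only mirrors the idea, and the function name is an assumption):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd-style k-means over 2D/3D node coordinates -- the simplest
    of the per-layer clustering options mentioned above (illustrative)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest center
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

In a layered layout, the resulting labels would group a layer's nodes into visually separable clusters before positioning them.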

  19. Open Source Software and Design-Based Research Symbiosis in Developing 3D Virtual Learning Environments: Examples from the iSocial Project

    ERIC Educational Resources Information Center

    Schmidt, Matthew; Galyen, Krista; Laffey, James; Babiuch, Ryan; Schmidt, Carla

    2014-01-01

    Design-based research (DBR) and open source software are both acknowledged as potentially productive ways for advancing learning technologies. These approaches have practical benefits for the design and development process and for building and leveraging community to augment and sustain design and development. This report presents a case study of…

  20. SNL3dFace

    2007-07-20

This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.

  1. SNL3dFace

    SciTech Connect

    Russ, Trina; Koch, Mark; Koudelka, Melissa; Peters, Ralph; Little, Charles; Boehnen, Chris; Peters, Tanya

    2007-07-20

This software distribution contains MATLAB and C++ code to enable identity verification using 3D images that may or may not contain a texture component. The code is organized to support system performance testing and system capability demonstration through the proper configuration of the available user interface. Using specific algorithm parameters the face recognition system has been demonstrated to achieve a 96.6% verification rate (Pd) at 0.001 false alarm rate. The system computes robust facial features of a 3D normalized face using Principal Component Analysis (PCA) and Fisher Linear Discriminant Analysis (FLDA). A 3D normalized face is obtained by aligning each face, represented by a set of XYZ coordinates, to a scaled reference face using the Iterative Closest Point (ICP) algorithm. The scaled reference face is then deformed to the input face using an iterative framework with parameters that control the deformed surface regulation and rate of deformation. A variety of options are available to control the information that is encoded by the PCA. Such options include the XYZ coordinates, the difference of each XYZ coordinate from the reference, the Z coordinate, the intensity/texture values, etc. In addition to PCA/FLDA feature projection this software supports feature matching to obtain similarity matrices for performance analysis. In addition, this software supports visualization of the STL, MRD, 2D normalized, and PCA synthetic representations in a 3D environment.
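The PCA feature-projection step can be sketched with an SVD, as below; `pca_project` is a hypothetical helper for illustration, not part of the MATLAB/C++ distribution:

```python
import numpy as np

def pca_project(faces, n_components):
    """Project flattened 3D face vectors onto their top principal components,
    analogous to the PCA feature-encoding step described above (sketch)."""
    mean = faces.mean(axis=0)
    X = faces - mean
    # economy SVD: the rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]
    return (X @ basis.T), basis, mean
```

The compact feature vectors returned here are what a matcher would compare to produce the similarity matrices mentioned in the abstract.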

  2. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y-, and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  3. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y-, and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  4. FRAMES-2.0 Software System: Linking to the Groundwater Modeling System (GMS) RT3D and MT3DMS Models

    SciTech Connect

    Whelan, Gene; Castleton, Karl J.; Pelton, Mitch A.

    2007-08-08

    Linkages to the Groundwater Modeling System have been developed at Pacific Northwest National Laboratory to enable the Nuclear Regulatory Commission (NRC) to more realistically assess the risk to the public of radioactive contaminants at NRC-licensed sites. Common software tools presently in use are limited in that they cannot assess contaminant migration through complex natural environments. The purpose of this initiative is to provide NRC with a licensing safety-analysis tool with sufficient power, flexibility, and utility that it can serve as the primary software platform for analyzing the hazards associated with licensing actions at those “complex” sites at which the traditional tools are inappropriate. As a tool designed to realistically approximate prospective doses to the public, this initiative addresses NRC’s safety-performance goal by confirming that licensing actions do not result in undue risk to the public.

  5. Preoperative planning for DIEP breast reconstruction: early experience of the use of computerised tomography angiography with VoNavix 3D software for perforator navigation.

    PubMed

    Pacifico, M D; See, M S; Cavale, N; Collyer, J; Francis, I; Jones, M E; Hazari, A; Boorman, J G; Smith, R W

    2009-11-01

The deep inferior epigastric perforator (DIEP) flap is normally the first choice in breast reconstruction; however, due to the considerable vascular anatomical variation and the learning curve for the procedure, muscle-sparing transverse rectus abdominis musculocutaneous (TRAM) flaps are still frequently performed to reduce the rate of complications. Accurate preoperative investigation of the perforators would allow better operative preparation and possibly shorten the learning curve. In an effort to increase the accuracy of preoperative planning and to aid preoperative decision-making in free abdominal flap breast reconstruction, we have acquired the use of VoNavix, software that creates three-dimensional images from computerised tomography angiography (CTA) data. The use of the VoNavix software for analysis of CTA provides superior imaging that can be viewed in theatre. It, together with CTA, enables decisions to be made preoperatively, including: which side to raise the flap; whether to aim for a medial or lateral row perforator; whether to take a segment of muscle; and whether to expect an easy or difficult dissection. We have now performed over 60 free abdominal flap breast reconstructions aided with CTA, and 10 of these cases also used VoNavix technology. This paper presents our initial experience with the use of this software, illustrated with three patient examples. The advantages and disadvantages are discussed. PMID:18708309

  6. Scalable rendering on PC clusters

    SciTech Connect

    WYLIE,BRIAN N.; LEWIS,VASILY; SHIRLEY,DAVID NOYES; PAVLAKOS,CONSTANTINE

    2000-04-25

This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), they can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL® with lighting, Gouraud shading, and individually specified triangles (not t-stripped).
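Sort-last rendering ends with a depth-compositing pass that merges the per-node framebuffers; a minimal sketch, assuming opaque geometry and per-pixel depth buffers (the function name is an assumption):

```python
import numpy as np

def z_composite(colors, depths):
    """Sort-last compositing: for each pixel, keep the fragment from the
    node whose depth value is smallest (i.e. nearest the viewer)."""
    colors = np.asarray(colors)   # shape (n_nodes, H, W)
    depths = np.asarray(depths)   # shape (n_nodes, H, W)
    winner = np.argmin(depths, axis=0)          # nearest node per pixel
    h, w = winner.shape
    return colors[winner, np.arange(h)[:, None], np.arange(w)]
```

Each render node only ever touches its own share of the polygons; this per-pixel merge is the step whose cost stays fixed as the dataset grows, which is what makes the approach scale.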

  7. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.
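The second step, a parameter sweep scored by mean squared error against a noiseless reference, can be sketched as follows (a 1-D box filter stands in for the real 3-D filters; all names are assumptions):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two signals."""
    return float(((a - b) ** 2).mean())

def sweep_denoise(noisy, reference, widths):
    """Try each smoothing width and keep the one whose output has the lowest
    MSE against the clean reference -- the sweep idea in miniature."""
    best = None
    for w in widths:
        kernel = np.ones(w) / w                      # box filter of width w
        out = np.convolve(noisy, kernel, mode="same")
        err = mse(out, reference)
        if best is None or err < best[1]:
            best = (w, err)
    return best  # (best width, its MSE)
```

The real tool sweeps filter-specific parameters (e.g. bilateral sigmas) over 3-D volumes, but the selection criterion is the same.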

  8. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  9. CIRCE2/DEKGEN2: A software package for facilitated optical analysis of 3-D distributed solar energy concentrators. Theory and user manual

    SciTech Connect

    Romero, V.J.

    1994-03-01

CIRCE2 is a computer code for modeling the optical performance of three-dimensional dish-type solar energy concentrators. Statistical methods are used to evaluate the directional distribution of reflected rays from any given point on the concentrator. Given concentrator and receiver geometries, sunshape (angular distribution of incident rays from the sun), and concentrator imperfections such as surface roughness and random deviation in slope, the code predicts the flux distribution and total power incident upon the target. Great freedom exists in the variety of concentrator and receiver configurations that can be modeled. Additionally, provisions for shading and receiver aperturing are included. DEKGEN2 is a preprocessor designed to facilitate input of geometry, error distributions, and sun models. This manual describes the optical model, user inputs, code outputs, and operation of the software package. A user tutorial is included in which several collectors are built and analyzed in step-by-step examples.
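The statistical treatment of slope error can be caricatured as a Monte Carlo perturbation of the mirror normal before reflecting each ray; the small-rotation approximation and the function name below are assumptions for illustration, not CIRCE2's actual method:

```python
import numpy as np

def reflect_with_slope_error(incident, normal, sigma_mrad, n_rays, seed=0):
    """Perturb the mirror normal by a random slope error (mrad), then apply
    the specular reflection law to each sample ray (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    out = np.empty((n_rays, 3))
    for i in range(n_rays):
        tilt = rng.normal(0.0, sigma_mrad * 1e-3, size=3)
        n = normal + np.cross(tilt, normal)   # small-rotation approximation
        n /= np.linalg.norm(n)
        out[i] = incident - 2.0 * np.dot(incident, n) * n
    return out
```

Accumulating many such rays per surface point yields the directional distribution of reflected light from which flux on the receiver is estimated.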

  10. Cosmic origins: experiences making a stereoscopic 3D movie

    NASA Astrophysics Data System (ADS)

    Holliman, Nick

    2010-02-01

Context: Stereoscopic 3D movies are gaining rapid acceptance commercially. In addition, our previous experience with the short 3D movie "Cosmic Cookery" showed that there is great public interest in the presentation of cosmology research using this medium. Objective: The objective of the work reported in this paper was to create a three-dimensional stereoscopic movie describing the life of the Milky Way galaxy. This was a technical and artistic exercise to take observed and simulated data from leading scientists and produce a short (six-minute) movie that describes how the Milky Way was created and what happens in its future. The initial target audience was the visitors to the Royal Society's 2009 Summer Science Exhibition in central London, UK. The movie is also intended to become a presentation tool for scientists and educators following the exhibition. Apparatus: The presentation and playback systems used consisted of off-the-shelf devices and software. The display platform for the Royal Society presentation was a RealD LP Pro switch used with a DLP projector to rear-project a 4-metre-diagonal image. The LP Pro enables the use of cheap disposable linearly polarising glasses, so that the high turnover rate of the audience (every ten minutes at peak times) could be sustained without needing delays to clean the glasses. The playback system was a high-speed PC with an external 8 TB RAID driving the projectors at 30 Hz per eye; the Lightspeed DepthQ software was used to decode and generate the video stream. Results: A wide range of tools was used to render the image sequences, ranging from commercial to custom software. Each tool was able to produce a stream of 1080p images in stereo at 30 fps. None of the rendering tools used allowed precise calibration of the stereo effect at render time, and therefore all sequences were tuned extensively in a trial-and-error process until the stereo effect was acceptable and supported a comfortable viewing experience. Conclusion: We

  11. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" has become an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has developed a personal method of initiation to 3D creation that entails the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared to the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else has to be taken into consideration to communicate in 3D? How do we handle the invisible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subject matter? For whom?

  12. Future Engineers 3-D Print Timelapse

    NASA Video Gallery

    NASA Challenges K-12 students to create a model of a container for space using 3-D modeling software. Astronauts need containers of all kinds - from advanced containers that can study fruit flies t...

  13. A 3D Geostatistical Mapping Tool

    1999-02-09

This software provides accurate 3D reservoir modeling tools and high-quality 3D graphics for PC platforms, enabling engineers and geologists to better comprehend reservoirs and consequently improve their decisions. The mapping algorithms are fractals, kriging, sequential Gaussian simulation, and three nearest-neighbor methods.

  14. 3-D structures of planetary nebulae

    NASA Astrophysics Data System (ADS)

    Steffen, W.

    2016-07-01

Recent advances in the 3-D reconstruction of planetary nebulae are reviewed. We include not only results for 3-D reconstructions, but also the current techniques in terms of general methods and software. In order to obtain more accurate reconstructions, we suggest extending the widely used assumption of homologous nebula expansion to map spectroscopically measured velocity to position along the line of sight.
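Under the homologous-expansion assumption mentioned, each parcel's velocity is proportional to its distance from the center, so a measured line-of-sight velocity maps directly to line-of-sight position once a kinematic age is adopted; a minimal sketch (the function name and unit choices are assumptions):

```python
# Homologous ("Hubble-like") expansion: position = velocity * kinematic age,
# so a Doppler velocity maps straight to depth along the line of sight.
SECONDS_PER_YEAR = 3.156e7

def los_position_km(v_los_km_s, age_years):
    """Line-of-sight position (km) under strictly homologous expansion."""
    return v_los_km_s * age_years * SECONDS_PER_YEAR
```

Relaxing this strict proportionality, as the review suggests, means replacing the single linear velocity-to-position law with a spatially varying one.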

  15. 3D PDF - a means of public access to geological 3D - objects, using the example of GTA3D

    NASA Astrophysics Data System (ADS)

    Slaby, Mark-Fabian; Reimann, Rüdiger

    2013-04-01

In geology, 3D modeling has become very important. In the past, two-dimensional data such as isolines, drilling profiles, or cross-sections based on those were used to illustrate the subsurface geology, whereas now we can create complex digital 3D models. These models are produced with special software, such as GOCAD®. The models can be viewed only through the software used to create them or through freely available viewers. The platform-independent PDF (Portable Document Format), established by Adobe, has found wide distribution. This format has constantly evolved over time. Meanwhile, it is possible to display CAD data in an Adobe 3D PDF file with the free Adobe Reader (version 7). In a 3D PDF, a 3D model is freely rotatable and can be assembled from a plurality of objects, which can thus be viewed from all directions on their own. In addition, it is possible to create movable cross-sections (profiles), and to assign transparency to the objects. Based on industry-standard CAD software, 3D PDFs can be generated from a large number of formats, or even be exported directly from this software. In geoinformatics, different approaches to creating 3D PDFs exist. The intent of the Authority for Mining, Energy and Geology to allow free access to the models of the Geotectonic Atlas (GTA3D) could not be realized with standard software solutions. A specially designed code converts the 3D objects to VRML (Virtual Reality Modeling Language). VRML is one of the few formats that allow using image files (maps) as textures and representing colors and shapes correctly. The files were merged in Acrobat X Pro, and a 3D PDF was generated subsequently. A topographic map, a display of geographic directions, and horizontal and vertical scales facilitate its use.

  16. Efficient hardware accelerated rendering of multiple volumes by data dependent local render functions

    NASA Astrophysics Data System (ADS)

    Lehmann, Helko; Geller, Dieter; Weese, Jürgen; Kiefer, Gundolf

    2007-03-01

    The inspection of a patient's data for diagnostics, therapy planning or therapy guidance involves an increasing number of 3D data sets, e.g. acquired by different imaging modalities, with different scanner settings or at different times. To enable viewing of the data in one consistent anatomical context fused interactive renderings of multiple 3D data sets are desirable. However, interactive fused rendering of typical medical data sets using standard computing hardware remains a challenge. In this paper we present a method to render multiple 3D data sets. By introducing local rendering functions, i.e. functions that are adapted to the complexity of the visible data contained in the different regions of a scene, we can ensure that the overall performance for fused rendering of multiple data sets depends on the actual amount of visible data. This is in contrast to other approaches where the performance depends mainly on the number of rendered data sets. We integrate the method into a streaming rendering architecture with brick-based data representations of the volume data. This enables efficient handling of data sets that do not fit into the graphics board memory and a good utilization of the texture caches. Furthermore, transfer and rendering of volume data that does not contribute to the final image can be avoided. We illustrate the benefits of our method by experiments with clinical data.
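The idea of data-dependent local render functions can be caricatured as choosing, per brick, the cheapest routine that covers only the data sets actually visible there; the function name and string encoding below are illustrative assumptions, not the paper's implementation:

```python
def pick_render_function(brick_visibility):
    """Given per-data-set visibility flags for one brick, pick the cheapest
    compositing routine: skip empty bricks, use a single-volume path when
    only one data set contributes, and fall back to fused rendering
    otherwise (illustrative sketch)."""
    visible = [i for i, v in enumerate(brick_visibility) if v]
    if not visible:
        return "skip"
    if len(visible) == 1:
        return f"single:{visible[0]}"
    return "fused:" + ",".join(map(str, visible))
```

Because most bricks in typical multi-modality scenes contain zero or one visible data set, this per-region dispatch is what makes the overall cost track the visible data rather than the number of loaded volumes.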

  17. Versatile annotation and publication quality visualization of protein complexes using POLYVIEW-3D

    PubMed Central

    Porollo, Aleksey; Meller, Jaroslaw

    2007-01-01

    Background Macromolecular visualization as well as automated structural and functional annotation tools play an increasingly important role in the post-genomic era, contributing significantly towards the understanding of molecular systems and processes. For example, three-dimensional (3D) models help in exploring protein active sites and functional hot spots that can be targeted in drug design. Automated annotation and visualization pipelines can also reveal other functionally important attributes of macromolecules. These goals are dependent on the availability of advanced tools that better integrate the existing databases, annotation servers and other resources with state-of-the-art rendering programs. Results We present a new tool for protein structure analysis, with a focus on annotation and visualization of protein complexes, which is an extension of our previously developed POLYVIEW web server. By integrating web technology with state-of-the-art software for macromolecular visualization, such as the PyMol program, POLYVIEW-3D enables combining versatile structural and functional annotations with a simple web-based interface for creating publication-quality structure renderings, as well as animated images for PowerPoint™, web sites and other electronic resources. The service is platform independent and no plug-ins are required. Several examples of how POLYVIEW-3D can be used for structural and functional analysis in the context of protein-protein interactions are presented to illustrate the available annotation options. Conclusion The POLYVIEW-3D server features PyMol image rendering, which provides detailed and high-quality presentation of macromolecular structures, with an easy-to-use web-based interface. POLYVIEW-3D also provides a wide array of options for automated structural and functional analysis of proteins and their complexes. Thus, the POLYVIEW-3D server may become an important resource for researchers and educators in the fields of protein

  18. FastScript3D - A Companion to Java 3D

    NASA Technical Reports Server (NTRS)

    Koenig, Patti

    2005-01-01

    FastScript3D is a computer program, written in the Java 3D(TM) programming language, that establishes an alternative language that helps users who lack expertise in Java 3D to use Java 3D for constructing three-dimensional (3D)-appearing graphics. The FastScript3D language provides a set of simple, intuitive, one-line text-string commands for creating, controlling, and animating 3D models. The first word in a string is the name of a command; the rest of the string contains the data arguments for the command. The commands can also be used as an aid to learning Java 3D. Developers can extend the language by adding custom text-string commands. The commands can define new 3D objects or load representations of 3D objects from files in formats compatible with such other software systems as X3D. The text strings can be easily integrated into other languages. FastScript3D facilitates communication between scripting languages [which enable programming of hyper-text markup language (HTML) documents to interact with users] and Java 3D. The FastScript3D language can be extended and customized on both the scripting side and the Java 3D side.
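
    The command style described above (the first word names the command, the rest of the string holds the data arguments) can be sketched as follows. The command vocabulary here (`sphere`, `move`) is hypothetical, invented for illustration, and not taken from the actual FastScript3D language:

```python
def parse_command(line):
    """Split a one-line text command into (name, argument_list)."""
    parts = line.strip().split()
    return parts[0], parts[1:]

class Scene:
    """Toy interpreter dispatching one-line commands to handlers.

    Developers extend the language by registering new handlers, mirroring
    how FastScript3D allows custom text-string commands.
    """

    def __init__(self):
        self.objects = {}  # object name -> properties dict
        self.handlers = {
            "sphere": self.make_sphere,
            "move": self.move,
        }

    def execute(self, line):
        name, args = parse_command(line)
        self.handlers[name](args)

    def make_sphere(self, args):
        # e.g. "sphere ball 1.5" -> object 'ball' with radius 1.5
        obj, radius = args[0], float(args[1])
        self.objects[obj] = {"type": "sphere", "radius": radius,
                             "pos": [0.0, 0.0, 0.0]}

    def move(self, args):
        # e.g. "move ball 1 2 3" -> reposition object 'ball'
        obj = args[0]
        self.objects[obj]["pos"] = [float(v) for v in args[1:4]]

scene = Scene()
scene.execute("sphere ball 1.5")
scene.execute("move ball 1 2 3")
print(scene.objects["ball"]["pos"])  # [1.0, 2.0, 3.0]
```

    Because each command is a plain string, the same lines can be emitted from HTML scripting environments, which is the integration path the abstract describes.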

  19. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

    Three-dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently being developed for the PanCam instrument of ESA's ExoMars rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system, and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision team in this context are: instrument design support and calibration processing; development of 3D vision functionality; development of a 3D visualization tool for scientific data analysis; 3D reconstruction from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework, PRoViP, establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are digital terrain models, ortho images, 3D meshes, and occlusion, solar-illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g., MSL Mastcam stereo processing). For 3D visualization, a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, along with simple measurements of the outcrop and sedimentary features.

  20. FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

    2005-02-01

    Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
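
    The paper's contribution is the FPGA architecture itself, which is not reproduced here; the following NumPy sketch only illustrates the class of filter being accelerated, using the standard Perona-Malik exponential conduction function. The parameter values and the periodic boundary handling (via `np.roll`) are simplifications for brevity:

```python
import numpy as np

def anisotropic_diffusion_3d(vol, n_iter=5, kappa=30.0, dt=0.1):
    """Perona-Malik anisotropic diffusion on a 3D volume.

    Large gradients (edges) suppress diffusion via the exponential
    conduction term, so speckle is smoothed while boundaries survive.
    Boundaries wrap (np.roll); a real implementation would clamp them.
    """
    v = vol.astype(np.float64).copy()
    for _ in range(n_iter):
        flux = np.zeros_like(v)
        for axis in range(3):
            for shift in (-1, 1):  # forward and backward neighbors
                g = np.roll(v, shift, axis=axis) - v
                # edge-stopping conduction coefficient
                flux += np.exp(-(g / kappa) ** 2) * g
        v += dt * flux  # explicit Euler update
    return v
```

    With dt = 0.1 and six neighbor terms the explicit update stays stable; the arithmetic per voxel per iteration makes clear why a software loop over a full ultrasound volume cannot keep up with acquisition rates.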

  1. Texture splats for 3D vector and scalar field visualization

    SciTech Connect

    Crawfis, R.A.; Max, N.

    1993-04-06

    Volume visualization is becoming an important tool for understanding large 3D datasets. A popular technique for volume rendering is known as splatting. With new hardware architectures offering substantial improvements in the performance of rendering texture-mapped objects, we present textured splats. An ideal reconstruction function for 3D signals is developed which can be used as a texture map for a splat. Extensions to the basic splatting technique are then developed to additionally represent vector fields.
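
    A minimal sketch of the footprint-texture idea: the paper derives an ideal reconstruction kernel, but a Gaussian (used here purely for illustration) shows how a splat footprint is precomputed once and then texture-mapped onto a screen-aligned quad for every voxel:

```python
import numpy as np

def gaussian_splat_texture(size=32, sigma=0.35):
    """Precompute a 2D footprint texture for splatting.

    The texture approximates the integral of a 3D reconstruction kernel
    along the view ray; mapping it onto a quad lets texture hardware do
    the per-pixel reconstruction work. Gaussian kernel and parameter
    values are illustrative only.
    """
    ax = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(ax, ax)
    tex = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return tex / tex.max()  # normalize peak to 1
```

    The texture is computed once; per frame, each voxel only contributes a textured, composited quad, which is what lets texture-mapping hardware accelerate the whole rendering.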

  2. From 3D view to 3D print

    NASA Astrophysics Data System (ADS)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular, used in many fields from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing which allows one to obtain a solid object from a 3D model realized with 3D modelling software. The final product is obtained through an additive process, in which successive layers of material are laid down one over the other. A 3D printer makes it possible to realize, in a simple way, very complex shapes which would be quite difficult to produce with dedicated conventional facilities. Because the object is built up layer by layer, no particular workflow is needed: it is sufficient to draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common deposition material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, such as pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope that is the hardware core of the small space mission CHEOPS (CHaracterising ExOPlanets Satellite) by ESA, which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture, and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions, respectively), it was necessary to split the largest parts of the instrument into smaller components to be reassembled and post-processed afterwards. A further issue is the resolution of the printed material, which is expressed in terms of layers

  3. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  4. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a real-time, 3D interactive means of accurately viewing the locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where NASA's fleet of these satellites is now, or where it will be up to a year in the future. The software also displays several Earth-science data sets that have been collected on a daily basis. The application uses a third-party, real-time, interactive 3D game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  5. Remote interactive direct volume rendering of AMR data

    SciTech Connect

    Kreylos, Oliver; Weber, Gunther H.; Bethel, E. Wes; Shalf, John M.; Hamann, Bernd; Joy, Kenneth I.

    2002-03-28

    We describe a framework for direct volume rendering of adaptive mesh refinement (AMR) data that operates directly on the hierarchical grid structure, without the need to resample data onto a single, uniform rectilinear grid. The framework can be used for a range of renderers optimized for particular hardware architectures: a hardware-assisted renderer for single-processor graphics workstations, and a massively parallel software-only renderer for supercomputers. It is also possible to use the framework for distributed rendering servers. By exploiting the multiresolution structure of AMR data, the hardware-assisted renderers can render large AMR data sets at interactive rates, even if the data is stored remotely.

  6. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    NASA Astrophysics Data System (ADS)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing were evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, the final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  7. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    PubMed Central

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three-dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, cultural heritage, medicine, and criminal investigation applications. PMID:22389618

  9. The 3D visualization technology research of submarine pipeline based Horde3D GameEngine

    NASA Astrophysics Data System (ADS)

    Yao, Guanghui; Ma, Xiushui; Chen, Genlang; Ye, Lingjian

    2013-10-01

    With the development of 3D display and virtual reality technology, their application is becoming more and more widespread. This paper applies 3D display technology to the monitoring of submarine pipelines. Using the Horde3D graphics rendering engine and the foundation database "submarine pipeline and relative landforms landscape synthesis database", we reconstruct the submarine pipeline and its surrounding submarine terrain in the computer, so as to display a virtual-reality scene of the submarine pipeline and show the relevant data collected from its monitoring.

  10. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    For identifying the pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, an educational tool that helps medical students and doctors study the anatomical structures in MRIs was made as follows. A healthy, young Korean male adult with a standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input into a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were then made. 3D images of the anatomical structures in the segmented images were reconstructed by the surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was composed. This educational tool is expected to help medical students and doctors study the anatomical structures in MRIs.

  11. Visualization of liver in 3-D

    NASA Astrophysics Data System (ADS)

    Chen, Chin-Tu; Chou, Jin-Shin; Giger, Maryellen L.; Kahn, Charles E., Jr.; Bae, Kyongtae T.; Lin, Wei-Chung

    1991-05-01

    Visualization of the liver in three dimensions (3-D) can improve the accuracy of volumetric estimation and also aid in surgical planning. We have developed a method for 3-D visualization of the liver using x-ray computed tomography (CT) or magnetic resonance (MR) images. This method includes four major components: (1) segmentation algorithms for extracting liver data from tomographic images; (2) interpolation techniques for both shape and intensity; (3) schemes for volume rendering and display, and (4) routines for electronic surgery and image analysis. This method has been applied to cases from a living-donor liver transplant project and appears to be useful for surgical planning.

  12. TRACE 3-D documentation

    SciTech Connect

    Crandall, K.R.

    1987-08-01

    TRACE 3-D is an interactive beam-dynamics program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined transport system. TRACE 3-D provides an immediate graphics display of the envelopes and the phase-space ellipses and allows nine types of beam-matching options. This report describes the beam-dynamics calculations and gives detailed instruction for using the code. Several examples are described in detail.

  13. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

    The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and the loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real time, and improvements such as high-resolution imagery and true 3-dimensional capability. This paper discusses the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  14. Interactive stereoscopic rendering of volumetric environments.

    PubMed

    Wan, Ming; Zhang, Nan; Qu, Huamin; Kaufman, Arie E

    2004-01-01

    We present an efficient stereoscopic rendering algorithm supporting interactive navigation through large-scale 3D voxel-based environments. In this algorithm, most of the pixel values of the right image are derived from the left image by a fast 3D warping based on a specific stereoscopic projection geometry. An accelerated volumetric ray casting then fills the remaining gaps in the warped right image. Our algorithm has been parallelized on a multiprocessor by employing effective task partitioning schemes and achieved a high cache coherency and load balancing. We also extend our stereoscopic rendering to include view-dependent shading and transparency effects. We have applied our algorithm in two virtual navigation systems, flythrough over terrain and virtual colonoscopy, and reached interactive stereoscopic rendering rates of more than 10 frames per second on a 16-processor SGI Challenge. PMID:15382695
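
    The warping step can be sketched in a simplified form. The disparity model and parameter values below are illustrative, not the paper's actual stereoscopic projection geometry; the point is that most right-image pixels come cheaply from a horizontal shift of left-image pixels, and the gaps (marked -1) are what the accelerated ray caster must then fill:

```python
import numpy as np

def warp_left_to_right(left, depth, eye_sep=0.06, focal=500.0):
    """Derive a right-eye image from the left-eye image by 3D warping.

    Each left-image pixel shifts horizontally by a disparity proportional
    to eye separation and inverse depth. Disoccluded pixels (never written)
    stay -1, marking the gaps a ray caster must fill. Parameter values are
    hypothetical placeholders.
    """
    h, w = left.shape
    right = -np.ones_like(left, dtype=float)  # -1 marks gaps
    disparity = (eye_sep * focal / depth).round().astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right
```

    Because the warp is a per-pixel copy while ray casting is a per-pixel volume traversal, limiting the ray caster to the gap pixels is what makes the stereo pair nearly as cheap as a single rendered view.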

  15. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. PMID:26562233

  16. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from the traditional map-making to the modern technology where the information can be created, edited, managed and analyzed. Like any other models, maps are simplified representations of real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing the spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web server, web applications and client server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and the free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of 3D campus for the Southern Illinois University Edwardsville is demonstrated.

  17. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  18. A rendering approach for stereoscopic web pages

    NASA Astrophysics Data System (ADS)

    Zhang, Jianlong; Wang, Wenmin; Wang, Ronggang; Chen, Qinshui

    2014-03-01

    Web technology provides a relatively easy way to generate content through which we recognize the world, and with the development of stereoscopic display technology, stereoscopic devices will become much more popular. The combination of web technology and stereoscopic display technology will bring a revolutionary visual effect. Stereoscopic 3D (S3D) web pages, in which text, images and video may have different depths, can be displayed on stereoscopic display devices. This paper presents an approach for rendering two-view S3D web pages containing text, images and widgets: first, an algorithm is developed to display stereoscopic elements such as text and widgets using a 2D graphics library; second, a method is presented to render a stereoscopic web page based on the current framework of the browser; third, a workaround is devised for a problem that arises in this method.

  19. Architecture for high performance stereoscopic game rendering on Android

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high profile titles on established platforms like Windows PC and PS3 there is a lack of GPU independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional render techniques, including depth based image rendering, both in terms of frame rates and impact on battery consumption.

  20. Fast volume rendering for medical image.

    PubMed

    Ying, Hu; Xin-He, Xu

    2005-01-01

    In order to improve the rendering speed of ray casting and make this technique a practical routine in medical applications, two new and improved techniques are described in this paper. First, an integrated method using the "proximity clouds" technique is applied to speed up ray casting. Second, the 3D rendering is further accelerated through a parallel implementation based on a "single computer, multiple CPUs" model. Four groups of CT data sets have been used to validate the improvement in rendering speed. The results show that the interactive rendering speed reaches 6-10 fps, which is almost real time, making our algorithm practical in the medical visualization routine. PMID:17281409
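
    The "proximity clouds" idea is to precompute, for every empty voxel, a distance within which a ray cannot hit anything, and then leap by that distance instead of stepping voxel by voxel. The sketch below reduces this to one dimension purely for illustration; the actual technique builds a 3D distance map over the volume:

```python
import numpy as np

def proximity_map_1d(occupied):
    """Distance (in cells) from each cell to the nearest occupied cell.

    Two linear sweeps give exact 1D distances; in 3D this would be a
    distance transform over the whole volume.
    """
    n = len(occupied)
    dist = np.full(n, n, dtype=int)
    last = -n
    for i in range(n):            # forward sweep
        if occupied[i]:
            last = i
        dist[i] = min(dist[i], i - last)
    last = 2 * n
    for i in range(n - 1, -1, -1):  # backward sweep
        if occupied[i]:
            last = i
        dist[i] = min(dist[i], last - i)
    return dist

def march(occupied, dist):
    """Ray march from cell 0, leaping dist[i] cells through empty space.

    Returns (hit_index, steps_taken); hit_index is -1 on a miss.
    """
    i, steps = 0, 0
    while i < len(occupied):
        steps += 1
        if occupied[i]:
            return i, steps
        i += max(1, dist[i])  # proximity cloud: guaranteed-safe skip
    return -1, steps
```

    The skip is safe because the nearest occupied cell is exactly `dist[i]` away, so leaping that far can never jump over it; long empty runs collapse to a handful of steps.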

  1. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years, and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of any requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state of the art of 3D radiochromic dosimetry, and its potential as a more comprehensive solution for the verification of complex radiation therapy treatments, and for 3D dose measurement in general.

  2. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and the Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements, or voxels, which are individually color-coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g., fault scarps, earthquake locations and plate motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As the number of visual objects in a particular scene grows, overlapping objects tend to mask one another; this clutter can be managed through the effective use of total or partial transparency (i.e., the alpha channel). In this way, the co-variation between different datasets can be investigated
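
    The opacity-filter idea described above can be sketched as front-to-back alpha compositing in which low-opacity voxels are simply rejected, which is what lets the viewer peer into the data volume. The threshold semantics below are illustrative, not those of any particular visualization package:

```python
def composite_ray(samples, opacity_threshold=0.0):
    """Front-to-back alpha compositing along one ray with an opacity filter.

    samples: list of (color, alpha) pairs ordered front to back.
    Voxels whose alpha falls below opacity_threshold are skipped entirely,
    making them fully transparent in the final image.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        if a < opacity_threshold:
            continue  # rejected voxel: the ray passes straight through
        color += (1.0 - alpha) * a * c   # accumulate weighted color
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha >= 0.99:                # early ray termination
            break
    return color, alpha
```

    Raising the threshold hides weak reflectivity values, so strong melt-lens reflections remain visible through otherwise opaque surrounding data.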

  3. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  4. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, significantly increasing the science activities that can be executed. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis, instead of over weeks or months as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  5. Extra Dimensions: 3D in PDF Documentation

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2012-12-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company. Incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open source libraries which allow the programmatic creation of 3D content in PDF format. While not providing the same level of access to CAD files as the commercial software, they do provide physicists with an alternative path for incorporating 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

  6. Assessing 3d Photogrammetry Techniques in Craniometrics

    NASA Astrophysics Data System (ADS)

    Moshobane, M. C.; de Bruyn, P. J. N.; Bester, M. N.

    2016-06-01

    Morphometrics (the measurement of morphological features) has been revolutionized by the creation of new techniques to study how organismal shape co-varies with several factors, such as ecophenotypy. Ecophenotypy refers to the divergence of phenotypes due to developmental changes induced by local environmental conditions, producing distinct ecophenotypes. None of the techniques hitherto utilized could explicitly address organismal shape in a complete biological form, i.e. three-dimensionally. This study investigates the use of the commercial three-dimensional (3D) modelling software PhotoModeler Scanner® (PMSc®) to produce accurate and high-resolution 3D models, specifically of Subantarctic fur seal (Arctocephalus tropicalis) and Antarctic fur seal (Arctocephalus gazella) skulls, which could allow for 3D measurements. Using this method, sixteen accurate 3D skull models were produced and five metrics were determined. The 3D linear measurements were compared to measurements taken manually with a digital caliper. In addition, repeated measurements were recorded by different researchers to determine repeatability. To allow for comparison, straight-line measurements were taken with the software, on the assumption that close accord with all manually measured features would illustrate the model's accurate replication of reality. Measurements were not significantly different, demonstrating that realistic 3D skull models can be successfully produced to provide a consistent basis for craniometrics, with the additional benefit of allowing non-linear measurements if required.

  7. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  8. Methods for comparing 3D surface attributes

    NASA Astrophysics Data System (ADS)

    Pang, Alex; Freeman, Adam

    1996-03-01

    A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometries are assumed to be identical and only the surface attributes (color, texture, etc.) are variable. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, as well as more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.
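    Mapping difference information to color, the simplest of the comparison methods mentioned above, might look like the following NumPy sketch; the diverging red-white-blue scheme and the function name are my own assumptions, not the paper's:

```python
import numpy as np

def difference_to_color(attr_a, attr_b):
    """Map signed per-vertex attribute differences to diverging RGB colors.

    Negative differences shade toward blue, positive toward red, and equal
    values come out white, in the spirit of mapping difference information
    to color. Illustrative only.
    """
    diff = attr_a - attr_b
    scale = np.abs(diff).max() or 1.0    # avoid divide-by-zero if identical
    t = diff / scale                      # normalized to [-1, 1]
    r = np.where(t >= 0, 1.0, 1.0 + t)   # fade red channel for negatives
    b = np.where(t <= 0, 1.0, 1.0 - t)   # fade blue channel for positives
    g = 1.0 - np.abs(t)                  # green dips with any difference
    return np.stack([r, g, b], axis=-1)

a = np.array([0.2, 0.5, 0.9])
b = np.array([0.2, 0.9, 0.5])
colors = difference_to_color(a, b)
print(colors)
```

    The resulting per-vertex colors would then be applied to the shared geometry, so a viewer sees where and in which direction the two renderings disagree.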

  9. Brandenburg 3D - a comprehensive 3D Subsurface Model, Conception of an Infrastructure Node and a Web Application

    NASA Astrophysics Data System (ADS)

    Kerschke, Dorit; Schilling, Maik; Simon, Andreas; Wächter, Joachim

    2014-05-01

    The Energiewende and the increasing scarcity of raw materials will lead to an intensified utilization of the subsurface in Germany. Within this context, geological 3D modeling is a fundamental approach for integrated decision and planning processes. Initiated by the development of the European Geospatial Infrastructure INSPIRE, the German State Geological Offices started digitizing their predominantly analog archive inventory. Until now, a comprehensive 3D subsurface model of Brandenburg did not exist. The project B3D therefore strove to develop a new 3D model as well as a subsequent infrastructure node to integrate all geological and spatial data within the Geodaten-Infrastruktur Brandenburg (Geospatial Infrastructure, GDI-BB) and provide it to the public through an interactive 2D/3D web application. The functionality of the web application is based on a client-server architecture. Server-side, all available spatial data are published through GeoServer. GeoServer is designed for interoperability and acts as the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) standard, which provides the interface for requesting geographical features. In addition, GeoServer implements, among others, the high-performance certified-compliant Web Map Service (WMS) that serves geo-referenced map images. For publishing 3D data, the OGC Web 3D Service (W3DS), a portrayal service for three-dimensional geo-data, is used. The W3DS displays elements representing the geometry, appearance, and behavior of geographic objects. On the client side, the web application is based solely on Free and Open Source Software and leans on the JavaScript API WebGL, which allows the interactive rendering of 2D and 3D graphics with GPU-accelerated physics and image processing as part of the web page canvas, without the use of plug-ins. WebGL is supported by most web browsers (e.g., Google Chrome, Mozilla Firefox, Safari, and Opera). The web

  10. Object-oriented parallel polygon rendering

    SciTech Connect

    Heiland, R.W.

    1994-09-01

    Since many scientific datasets can be visualized using some polygonal representation, a polygon renderer has broad use for scientific visualization. With today's high performance computing applications producing very large datasets, a parallel polygon renderer is a necessary tool for keeping the compute-visualize cycle at a minimum. This paper presents a polygon renderer that combines the shared-memory and message-passing models of parallel programming. It uses the Global Arrays library, a shared-memory programming toolkit for distributed memory machines. The experience of using an object-oriented approach for software design and development is also discussed.

  11. High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment

    NASA Astrophysics Data System (ADS)

    Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.

    2006-12-01

    The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high resolution, high quality renderings of Earth sciences data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Coastal Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene, using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations aim to give researchers a broader context for sensor locations relative to geologic characteristics, to serve as an educational resource for informal education settings and increase public awareness, and to aid researchers' proposals and presentations. They are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.

  12. 3D imaging reconstruction and impacted third molars: case reports

    PubMed Central

    Tuzi, Andrea; Di Bari, Roberto; Cicconetti, Andrea

    2012-01-01

    Summary There is a debate in the literature about the need for Computed Tomography (CT) before removing third molars, even if positive radiographic signs are present. In a few cases, the third molar is so close to the inferior alveolar nerve that its extraction might expose patients to the risk of post-operative neuro-sensitive alterations of the skin and the mucosa of the homolateral lower lip and chin. Thus, injury to the inferior alveolar nerve may represent a serious, though infrequent, neurologic complication in third molar surgery, making a careful pre-operative evaluation of the anatomical relationship with the inferior alveolar nerve by means of radiographic imaging techniques necessary. This contribution presents two case reports showing positive radiographic signs, which are the hallmarks of a possible close relationship between the inferior alveolar nerve and the third molars. We aim at better defining the relationship between third molars and the mandibular canal using Dental CT Scan, DICOM image acquisition, and 3D reconstruction with dedicated software. From our study we conclude that 3D images are not indispensable, but they can provide very valuable assistance in the most complicated cases. PMID:23386934

  13. Multivariate volume rendering

    SciTech Connect

    Crawfis, R.A.

    1996-03-01

    This paper presents a new technique for representing multivalued data sets defined on an integer lattice. It extends the state of the art in volume rendering to include nonhomogeneous volume representations, that is, volume rendering of materials with very fine detail (e.g., translucent granite) within a voxel. Multivariate volume rendering is achieved by introducing controlled amounts of noise within the volume representation. Varying the local amount of noise within the volume is used to represent a separate scalar variable. The technique can also be used in image synthesis to create more realistic clouds and fog.
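    The idea of encoding a second scalar variable as a locally varying noise amplitude can be sketched as follows; the normalization and uniform-noise model are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def modulate_with_noise(primary, secondary, max_noise=0.3, seed=0):
    """Encode a second scalar field as controlled noise on the first.

    The locally injected noise amplitude is proportional to `secondary`
    (normalized to [0, 1]), so a viewer reads smooth regions as low
    values of the second variable and speckled regions as high values.
    A sketch of the principle only.
    """
    rng = np.random.default_rng(seed)
    s = (secondary - secondary.min()) / (np.ptp(secondary) + 1e-12)
    noise = rng.uniform(-1.0, 1.0, size=primary.shape)
    return primary + max_noise * s * noise

primary = np.ones((16, 16, 16))
secondary = np.zeros((16, 16, 16))
secondary[8:, :, :] = 1.0            # high second variable in top half
out = modulate_with_noise(primary, secondary)
print(out[:8].std(), out[8:].std())
```

    The bottom half of the toy volume stays perfectly smooth while the top half becomes speckled, which is exactly the visual cue the paper exploits.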

  14. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), producing several statistical maps with an identical perspective in the 3D rendering, or animating the renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful for showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340

  15. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face when producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide results relating to both 3D image quality and render performance.
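    A minimal sketch of the DIBR approach described above, generating a stereo pair from one rendered view and its z-buffer; the linear disparity model and the hole handling (disoccluded pixels are simply left at zero) are simplifying assumptions, not a production game-driver implementation:

```python
import numpy as np

def dibr_stereo_pair(image, depth, max_disparity=4):
    """Generate a left/right pair from one image plus its depth map.

    Each pixel is shifted horizontally by a disparity proportional to its
    normalized nearness (near pixels shift more), which is the essence of
    Depth-Image-Based Rendering. Disoccluded pixels are left as zeros
    here; a real driver would inpaint these holes.
    """
    h, w = depth.shape
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-12)
    disparity = np.round(max_disparity * (1.0 - d)).astype(int)  # near = large
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            s = disparity[y, x]
            if 0 <= x + s < w:
                left[y, x + s] = image[y, x]
            if 0 <= x - s < w:
                right[y, x - s] = image[y, x]
    return left, right

img = np.arange(64, dtype=float).reshape(8, 8)
depth = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # near objects at left
left, right = dibr_stereo_pair(img, depth)
print(left.shape, right.shape)
```

    The appeal of this approach for gaming is that the scene is rasterized only once; the second eye's view is synthesized from buffers the GPU already produced.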

  16. Using Cabri3D Diagrams for Teaching Geometry

    ERIC Educational Resources Information Center

    Accascina, Giuseppe; Rogora, Enrico

    2006-01-01

    Cabri3D is potentially very useful software for learning and teaching 3D geometry. The dynamic nature of the digital diagrams produced with it provides a useful aid for helping students better develop concept images of geometric concepts. However, since any Cabri3D diagram represents three-dimensional objects on the two-dimensional screen of…

  17. Interactive photogrammetric system for mapping 3D objects

    NASA Astrophysics Data System (ADS)

    Knopp, Dave E.

    1990-08-01

    A new system, FOTO-G, has been developed for 3D photogrammetric applications. It is a production-oriented software system designed to work with highly unconventional photogrammetric image configurations which result when photographing 3D objects. A demonstration with imagery from an actual 3D-mapping project is reported.

  18. Anatomical annotation on vascular structure in volume rendered images.

    PubMed

    Jiang, Zhengang; Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Kajita, Yasukazu; Wakabayashi, Toshihiko; Mori, Kensaku

    2013-03-01

    The precise annotation of vascular structure is desired in computer-assisted systems to help surgeons identify each vessel branch. This paper proposes a method that annotates vessels on volume rendered images by rendering their names on them using a two-pass rendering process. In the first rendering pass, vessel surface models are generated using such properties as centerlines, radii, and running directions. The vessel names are then drawn on the vessel surfaces, and the vessel name images and the corresponding depth buffer are generated by a virtual camera at the viewpoint. In the second rendering pass, volume rendered images are generated by a ray casting volume rendering algorithm that considers the depth buffer generated in the first rendering pass. After the two-pass rendering is finished, an annotated image is generated by blending the volume rendered image with the surface rendered image. To confirm the effectiveness of our proposed method, we implemented a computer-assisted system for the automated annotation of abdominal arteries. The experimental results show that vessel names can be drawn on the corresponding vessel surfaces in the volume rendered images at a computing cost nearly the same as that of volume rendering alone. The proposed method has great potential for annotating vessels in 3D medical images in clinical applications, such as image-guided surgery. PMID:23562139
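    The depth-aware blending of the two passes can be illustrated with a small array-based sketch; the function and buffer names are hypothetical, and real ray casters perform this test per ray rather than per finished image:

```python
import numpy as np

def composite_with_depth(volume_rgb, volume_depth, label_rgb, label_depth):
    """Blend a surface-pass label image into a volume-rendered image.

    A label pixel wins only where its depth is closer to the camera than
    the depth recorded during the volume pass, mimicking a two-pass
    scheme in which the second pass consults the first pass's depth
    buffer. Hypothetical sketch, not the authors' code.
    """
    visible = label_depth < volume_depth           # label in front of vessel
    out = volume_rgb.copy()
    out[visible] = label_rgb[visible]
    return out

vol_rgb = np.zeros((4, 4, 3)); vol_rgb[...] = [0.5, 0.0, 0.0]
vol_depth = np.full((4, 4), 10.0)
lab_rgb = np.ones((4, 4, 3))                       # white vessel-name text
lab_depth = np.full((4, 4), np.inf); lab_depth[1, 1] = 5.0
out = composite_with_depth(vol_rgb, vol_depth, lab_rgb, lab_depth)
print(out[1, 1], out[0, 0])
```

    Only the one label pixel in front of the volume sample shows through; everywhere else the volume rendering is untouched, which is why the added cost stays close to plain volume rendering.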

  19. Volume Rendering of Heliospheric Data

    NASA Astrophysics Data System (ADS)

    Hick, P. P.; Jackson, B. V.; Bailey, M. J.; Buffington, A.

    2001-12-01

    We demonstrate some of the techniques we currently use for the visualization of heliospheric volume data. Our 3D volume data usually are derived from tomographic reconstructions of the solar wind density and velocity from remote sensing observations (e.g., Thomson scattering and interplanetary scintillation observations). We show examples of hardware-based volume rendering using the Volume Pro PCI board (from TeraRecon, Inc.). This board updates the display at a rate of up to 30 frames per second using a parallel projection algorithm, allowing the manipulation of volume data in real-time. In addition, the manipulation of 4D volume data (the 4th dimension usually representing time) enables the visualization in real-time of an evolving (time-dependent) data set. We also show examples of perspective projections using IDL. This work was supported through NASA grant NAG5-9423.

  20. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    NASA Astrophysics Data System (ADS)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have made three-dimensional visualization of biomedical computed tomographic (CT) images a great aid to biomedical engineering research. To keep pace with Internet technology, in which 3D data are typically stored and processed on powerful servers accessible via TCP/IP, the isosurface results should be broadly applicable to medical visualization. Furthermore, this project is intended as a future part of the PACS system our lab is working on, so the system uses the 3D file format VRML2.0, which allows 3D models to be manipulated through a Web interface. In this program we implemented the generation and modification of triangular isosurface meshes using the marching cubes algorithm, and then used OpenGL and MFC techniques to render the isosurfaces and manipulate the voxel data. This software is well suited to the visualization of volumetric data. Its drawbacks are that 3D image processing on personal computers is rather slow and that the set of tools for 3D visualization is limited. However, these limitations have not affected the applicability of this platform for the tasks needed in elementary laboratory experiments or data preprocessing.
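    A triangle mesh produced by a marching cubes step can be serialized as VRML2.0 for Web-based manipulation; the following minimal writer is a hand-rolled sketch of the file format, not the authors' program:

```python
def mesh_to_vrml(vertices, faces):
    """Serialize a triangle mesh as a minimal VRML 2.0 IndexedFaceSet.

    `vertices` is a list of (x, y, z) tuples and `faces` a list of vertex
    index triples, as an isosurface extraction step such as marching
    cubes would produce. Sketch only; a full exporter would also emit
    normals and appearance nodes.
    """
    points = ", ".join("%g %g %g" % v for v in vertices)
    index = ", ".join("%d, %d, %d, -1" % f for f in faces)  # -1 ends a face
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  geometry IndexedFaceSet {\n"
        "    coord Coordinate { point [ %s ] }\n"
        "    coordIndex [ %s ]\n"
        "  }\n"
        "}\n" % (points, index)
    )

# A single triangle is enough to exercise the writer.
vrml = mesh_to_vrml([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(vrml)
```

    Any VRML-capable browser plug-in or viewer of that era could then load the generated file, which is what makes the format attractive for a PACS-style Web interface.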

  1. 3D microscope

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2008-02-01

    In order to circumvent the fact that only one observer can view the image from a stereoscopic microscope, an attachment was devised for displaying the 3D microscopic image on a large LCD monitor for viewing by multiple observers in real time. The principle of operation, design, fabrication, and performance are presented, along with tolerance measurements relating to the properties of the cellophane half-wave plate used in the design.

  2. Effect of viewing distance on 3D fatigue caused by viewing mobile 3D content

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Lee, Dong-Su; Park, Min-Chul; Yano, Sumio

    2013-05-01

    With the advent of autostereoscopic display techniques and increased demand for smart phones, there has been significant growth in mobile TV markets. The rapid growth in technical, economical, and social aspects has encouraged 3D TV manufacturers to apply 3D rendering technology to mobile devices so that people have more opportunities to encounter 3D content anytime and anywhere. Even as mobile 3D technology drives the current market growth, one important issue must be considered for the market's consistent development and growth: put briefly, the human factors involved in mobile 3D viewing should be taken into consideration before developing mobile 3D technology. Many studies have investigated whether mobile 3D viewing causes undesirable biomedical effects such as motion sickness and visual fatigue, but few have examined the main factors adversely affecting human health. Viewing distance is considered one of the main factors in establishing optimized viewing environments from a viewer's point of view. Thus, in an effort to determine human-friendly viewing environments, this study investigates the effect of viewing distance on the human visual system in mobile 3D environments. Recording and analyzing brainwaves before and after watching mobile 3D content, we explore how viewing distance affects the viewing experience from physiological and psychological perspectives. The results obtained in this study are expected to provide viewing guidelines for viewers, help protect viewers from undesirable 3D effects, and support gradual progress towards human-friendly mobile 3D viewing.

  3. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information about the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric, and remote sensing applications. This experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, with laser point cloud data as the basis, a Digital Ortho-photo Map as an auxiliary source, and 3DsMAX software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  4. Visualization of 3D Geological Data using COLLADA and KML

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Um, Jeong-Gi; Park, Myong-Ho

    2013-04-01

    This study presents a method to visualize 3D geological data using COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) and Keyhole Markup Language (KML, the XML-based scripting language of Google Earth). We used COLLADA files to represent different types of 3D geological data such as boreholes, fence sections, surface-based 3D volumes, and 3D grids by triangle meshes (a set of triangles connected by their common edges or corners). The COLLADA files were imported into the 3D render window of Google Earth using KML codes. An application to the Grosmont formation in Alberta, Canada showed that the combination of COLLADA and KML enables Google Earth to visualize 3D geological structures and properties.
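    The KML wrapper that places a COLLADA file in Google Earth can be sketched as follows; element names follow the KML 2.2 schema, while the coordinates and the .dae filename are illustrative placeholders:

```python
def kml_model_placemark(name, lon, lat, alt, dae_href):
    """Build a KML Placemark that georeferences a COLLADA model.

    This mirrors the combination described in the abstract: the geometry
    lives in a .dae (COLLADA) file and a small KML wrapper tells Google
    Earth where to place it. Values below are illustrative placeholders.
    """
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Model>
      <altitudeMode>absolute</altitudeMode>
      <Location>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <altitude>{alt}</altitude>
      </Location>
      <Link><href>{dae_href}</href></Link>
    </Model>
  </Placemark>
</kml>
"""

kml = kml_model_placemark("geology_volume", -112.5, 55.0, 0.0, "model.dae")
print(kml)
```

    Saving this next to the referenced .dae file and opening it in Google Earth renders the 3D geological body in its geographic position.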

  5. LOTT RANCH 3D PROJECT

    SciTech Connect

    Larry Lawrence; Bruce Miller

    2004-09-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126 square mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed-bit format. It would have been impractical, if not impossible, to process the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved through reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed-bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution as a priority; much of the potential resolution had been lost through the initial summing of the field data. Modern computers now in use have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates offer a variety of quality control and statics resolution capabilities pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  6. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic image system measuring 35x35x105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the image of a biological specimen can be captured in a single shot for ease of use. With the light field raw data and program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm is needed to precisely determine depth position. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel usage efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) distinguishing two different-color fluorescent particles separated by a cover glass within a 600um range, show its focal stacks, and show their 3-D positions.
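    Building a focal stack from a single light-field exposure relies on shift-and-add refocusing, which can be sketched as follows; the data layout (a dict of sub-aperture views keyed by microlens offset) and the linear parallax model are toy assumptions:

```python
import numpy as np

def refocus(subaperture_images, shift):
    """Synthetically refocus a light-field capture by shift-and-add.

    `subaperture_images` maps each view's (u, v) angular offset to a 2D
    image; shifting each view by `shift * (u, v)` before averaging moves
    the synthetic focal plane. Sweeping `shift` over a range of values
    produces the focal stack. Toy sketch of the principle only.
    """
    acc = None
    for (u, v), img in subaperture_images.items():
        shifted = np.roll(np.roll(img, int(shift * u), axis=1),
                          int(shift * v), axis=0)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subaperture_images)

# Two views of a point source, displaced by parallax between views.
a = np.zeros((5, 5)); a[2, 2] = 1.0
b = np.zeros((5, 5)); b[2, 3] = 1.0      # shifted right in the (u=1, v=0) view
views = {(0, 0): a, (1, 0): b}
in_focus = refocus(views, shift=-1)       # undo the parallax
print(in_focus[2, 2])
```

    At the shift that cancels the parallax, the point source adds constructively; at other shifts it smears, which is exactly the depth cue used to determine an object's depth position.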

  7. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.

  8. Multiviewer 3D monitor

    NASA Astrophysics Data System (ADS)

    Kostrzewski, Andrew A.; Aye, Tin M.; Kim, Dai Hyun; Esterkin, Vladimir; Savant, Gajendra D.

    1998-09-01

    Physical Optics Corporation has developed an advanced 3-D virtual reality system for use with simulation tools for training technical and military personnel. This system avoids such drawbacks of other virtual reality (VR) systems as eye fatigue, headaches, and alignment for each viewer, all of which are due to the need to wear special VR goggles. The new system is based on direct viewing of an interactive environment. This innovative holographic multiplexed screen technology makes it unnecessary for the viewer to wear special goggles.

  9. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
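    The Convolvotron's head-related filters are measured data, but the core per-source operation is convolution of the mono source with a left-ear and a right-ear impulse response. A toy sketch with hypothetical impulse responses (a pure interaural delay and attenuation, standing in for real HRIR measurements):

    ```python
    import numpy as np

    def spatialize(mono, hrir_left, hrir_right):
        """Convolve one mono source with per-ear impulse responses, the core
        operation a Convolvotron-style renderer performs for each source."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        n = max(len(left), len(right))
        out = np.zeros((n, 2))        # column 0 = left ear, column 1 = right ear
        out[:len(left), 0] = left
        out[:len(right), 1] = right
        return out
    ```

    In the real system the filters are large and time-varying (updated as the head moves), which is what the abstract's "large time-varying filters that compensate for motion" refers to.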

  10. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  11. 3D Simulation: Microgravity Environments and Applications

    NASA Technical Reports Server (NTRS)

    Hunter, Steve L.; Dischinger, Charles; Estes, Samantha; Parker, Nelson C. (Technical Monitor)

    2001-01-01

    Most, if not all, 3-D and Virtual Reality (VR) software programs are designed for one-G gravity applications. Space environment simulations require gravity effects of one one-thousandth to one one-millionth of that at the Earth's surface (10(exp -3) - 10(exp -6) G), so one must be able to generate simulations that replicate those microgravity effects upon simulated astronauts. Unfortunately, the software programs utilized by the National Aeronautics and Space Administration do not have the ability to readily neutralize the one-G gravity effect. This pre-programmed situation causes the engineer or analyst difficulty during microgravity simulations. Therefore, microgravity simulations require special techniques or additional code in order to apply the power of 3D graphic simulation to space-related applications. This paper discusses the problem and possible solutions to allow microgravity 3-D/VR simulations to be completed successfully without program code modifications.

  12. Real-time 3D image reconstruction guidance in liver resection surgery

    PubMed Central

    Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-01-01

    Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods From a patient’s medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon’s intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in gaining safety, but also current limits that automatic augmented reality will overcome. Conclusions Virtual patient modeling should be mandatory for certain interventions that have now to be defined, such as liver surgery. Augmented reality is clearly the next step of the new surgical instrumentation but remains currently limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this

  13. 3D Visualization of Recent Sumatra Earthquake

    NASA Astrophysics Data System (ADS)

    Nayak, Atul; Kilb, Debi

    2005-04-01

    Scientists and visualization experts at the Scripps Institution of Oceanography have created an interactive three-dimensional visualization of the 28 March 2005 magnitude 8.7 earthquake in Sumatra. The visualization shows the earthquake's hypocenter and aftershocks recorded until 29 March 2005, and compares it with the location of the 26 December 2004 magnitude 9 event and the consequent seismicity in that region. The 3D visualization was created using the Fledermaus software developed by Interactive Visualization Systems (http://www.ivs.unb.ca/) and stored as a "scene" file. To view this visualization, viewers need to download and install the free viewer program iView3D (http://www.ivs3d.com/products/iview3d).

  14. Modeling Cellular Processes in 3-D

    PubMed Central

    Mogilner, Alex; Odde, David

    2011-01-01

    Summary Recent advances in photonic imaging and fluorescent protein technology offer unprecedented views of molecular space-time dynamics in living cells. At the same time, advances in computing hardware and software enable modeling of ever more complex systems, from global climate to cell division. As modeling and experiment become more closely integrated, we must address the issue of modeling cellular processes in 3-D. Here, we highlight recent advances related to 3-D modeling in cell biology. While some processes require full 3-D analysis, we suggest that others are more naturally described in 2-D or 1-D. Keeping the dimensionality as low as possible reduces computational time and makes models more intuitively comprehensible; however, the ability to test full 3-D models will build greater confidence in models generally and remains an important emerging area of cell biological modeling. PMID:22036197

  15. Optically rewritable 3D liquid crystal displays.

    PubMed

    Sun, J; Srivastava, A K; Zhang, W; Wang, L; Chigrinov, V G; Kwok, H S

    2014-11-01

    The optically rewritable liquid crystal display (ORWLCD) is a concept based on the optically addressed bi-stable display, which does not need any power to hold an image after it has been uploaded. Recently, demand for 3D image displays has increased enormously. Several attempts have been made to achieve 3D images on the ORWLCD, but all of them involve high complexity for image processing at both the hardware and software levels. In this Letter, we disclose a concept for the 3D-ORWLCD: dividing the given image into three parts with different optic axes. A quarter-wave plate is placed on top of the ORWLCD to modify the light emerging from different domains of the image in different manners. Thereafter, Polaroid glasses can be used to visualize the 3D image. The 3D image can be refreshed on the 3D-ORWLCD in one step with a proper ORWLCD printer and image processing; therefore, with easy image refreshing and good image quality, such displays can be applied in many applications, viz. 3D bi-stable displays, security elements, etc. PMID:25361316

  16. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool.

    PubMed

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2009-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444

  18. Interactive Visualization of 3-D Mantle Convection Extended Through AJAX Applications

    NASA Astrophysics Data System (ADS)

    McLane, J. C.; Czech, W.; Yuen, D.; Greensky, J.; Knox, M. R.

    2008-12-01

    We have designed a new software system for real-time interactive visualization of results taken directly from large-scale simulations of 3-D mantle convection and other large-scale simulations. This approach allows for intense visualization sessions lasting a couple of hours, as opposed to storing massive amounts of data in a storage system. Our data sets consist of 3-D data for volume rendering with over 10 million unknowns at each timestep. Large-scale visualization on a display wall holding around 13 million pixels has already been accomplished, with extension to hand-held devices such as the OQO, the Nokia N800 and, recently, the iPhone. We are developing web-based software in Java to extend the use of this system across long distances. The software is aimed at creating an interactive and functional application capable of running on multiple browsers by taking advantage of two AJAX-enabled web frameworks: Echo2 and Google Web Toolkit. The software runs in two modes, allowing a user either to control an interactive session or to observe a session controlled by another user. The modular build of the system allows components to be swapped out for new ones, so that other forms of visualization can be accommodated, such as Molecular Dynamics in mineral physics or 2-D data sets from lithospheric regional models.

  19. Crowdsourcing Based 3d Modeling

    NASA Astrophysics Data System (ADS)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g., in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software, simply by using images from the community, without visiting the site.

  20. 3D polarimetric purity

    NASA Astrophysics Data System (ADS)

    Gil, José J.; San José, Ignacio

    2010-11-01

    From our previous definition of the indices of polarimetric purity for 3D light beams [J.J. Gil, J.M. Correas, P.A. Melero and C. Ferreira, Monogr. Semin. Mat. G. de Galdeano 31, 161 (2004)], an analysis of their geometric and physical interpretation is presented. It is found that, in agreement with previous results, the first parameter is a measure of the degree of polarization, whereas the second parameter (called the degree of directionality) is a measure of the mean angular aperture of the direction of propagation of the corresponding light beam. This pair of invariant, non-dimensional, indices of polarimetric purity contains complete information about the polarimetric purity of a light beam. The overall degree of polarimetric purity is obtained as a weighted quadratic average of the degree of polarization and the degree of directionality.
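    Following the definitions in the Gil et al. work cited above, the two indices can be computed from the ordered eigenvalues l1 >= l2 >= l3 of the 3x3 coherency matrix R as P1 = (l1 - l2)/tr R and P2 = (l1 + l2 - 2*l3)/tr R, with the overall degree of polarimetric purity obtained as the weighted quadratic average P_delta = (1/2)*sqrt(3*P1^2 + P2^2). A sketch (our reading of those formulas, not the authors' code):

    ```python
    import numpy as np

    def purity_indices(R):
        """Indices of polarimetric purity of a 3x3 Hermitian coherency matrix R,
        with eigenvalues sorted l1 >= l2 >= l3."""
        lam = np.sort(np.linalg.eigvalsh(R))[::-1]
        tr = lam.sum()
        p1 = (lam[0] - lam[1]) / tr               # degree of polarization
        p2 = (lam[0] + lam[1] - 2 * lam[2]) / tr  # involves the degree of directionality
        p_delta = 0.5 * np.sqrt(3 * p1**2 + p2**2)  # overall polarimetric purity
        return p1, p2, p_delta
    ```

    Fully polarized light gives (1, 1, 1), a 2-D unpolarized beam gives (0, 1, 1/2), and fully isotropic 3-D light gives (0, 0, 0).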

  1. 3D field harmonics

    SciTech Connect

    Caspi, S.; Helm, M.; Laslett, L.J.

    1991-03-30

    We have developed a harmonic representation for the three-dimensional field components within the windings of accelerator magnets. The form in which the field is presented is suitable for interfacing with other codes that make use of the 3D field components (particle tracking and stability). The field components can be calculated with high precision and reduced CPU time at any location (r,{theta},z) inside the magnet bore. The same conductor geometry which is used to simulate line currents is also used in CAD, with modifications more readily available. It is our hope that the format used here for magnetic fields can be used not only as a means of delivering fields but also as a way by which beam dynamics can suggest corrections to the conductor geometry. 5 refs., 70 figs.
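    The paper's 3-D expansion is not reproduced in the abstract, but its z-independent limit is the standard 2-D transverse multipole series used throughout accelerator magnet work, By + i*Bx = sum_n C_n * ((x + i*y)/Rref)^(n-1). A sketch of evaluating that limit (the coefficient convention, with n = 1 as the dipole term, is an assumption on our part):

    ```python
    import numpy as np

    def transverse_field(coeffs, x, y, rref=1.0):
        """Evaluate By + i*Bx = sum_n C_n * ((x + i*y)/rref)**(n-1).
        coeffs[0] is the dipole term (n = 1), coeffs[1] the quadrupole, ..."""
        z = (x + 1j * y) / rref
        f = sum(c * z**k for k, c in enumerate(coeffs))
        return f.imag, f.real  # (Bx, By)
    ```

    A pure dipole coefficient gives a uniform vertical field everywhere in the bore; a pure quadrupole gives a field growing linearly with distance from the axis.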

  2. 'Bonneville' in 3-D!

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called 'Bonneville' after driving approximately 13 meters (42.7 feet) to get a better vantage point. Spirit's current position is close enough to the edge to see the interior of the crater, but high enough and far enough back to get a view of all of the walls. Because scientists and rover controllers are so pleased with this location, they will stay here for at least two more martian days, or sols, to take high resolution panoramic camera images of 'Bonneville' in its entirety. Just above the far crater rim, on the left side, is the rover's heatshield, which is visible as a tiny reflective speck.

  3. Target surface finding using 3D SAR data

    NASA Astrophysics Data System (ADS)

    Ruiter, Jason R.; Burns, Joseph W.; Subotic, Nikola S.

    2005-05-01

    Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all-weather, near-visual target identification and/or scene interpretation. One method of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface-finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube, which provides a recognizable representation of the target surface. This algorithm was applied to the public-release X-patch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The sensitivity of the algorithm to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low Signal-to-Noise Ratio (SNR) conditions.
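    Marching Tetrahedrons decomposes each cell of the 3-D data cube into tetrahedra and triangulates the iso-surface crossing each one. The per-tetrahedron step, the heart of the algorithm, can be sketched as follows (array conventions are ours, not the paper's):

    ```python
    import numpy as np

    def tet_isosurface(verts, vals, iso):
        """Triangulate the iso-surface crossing one tetrahedron.
        verts: (4, 3) vertex positions; vals: 4 scalar samples.
        Returns a list of triangles, each a (3, 3) array of points."""
        inside = [i for i in range(4) if vals[i] > iso]
        outside = [i for i in range(4) if vals[i] <= iso]

        def cross(i, j):
            # linear interpolation of the crossing point on edge i-j
            t = (iso - vals[i]) / (vals[j] - vals[i])
            return verts[i] + t * (verts[j] - verts[i])

        if len(inside) in (0, 4):
            return []                       # surface does not cross this tet
        if len(inside) in (1, 3):
            a = inside[0] if len(inside) == 1 else outside[0]
            others = [k for k in range(4) if k != a]
            return [np.array([cross(a, k) for k in others])]  # one triangle
        # 2-2 split: the crossing section is a quad -> two triangles
        i0, i1 = inside
        o0, o1 = outside
        q = [cross(i0, o0), cross(i0, o1), cross(i1, o1), cross(i1, o0)]
        return [np.array(q[:3]), np.array([q[0], q[2], q[3]])]
    ```

    The full algorithm marches this step over every tetrahedron in the volume and stitches the resulting triangles into the target surface.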

  4. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm and offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  5. 3D Lasers Increase Efficiency, Safety of Moving Machines

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Canadian company Neptec Design Group Ltd. developed its Laser Camera System, used by shuttles to render 3D maps of their hulls for assessing potential damage. Using NASA funding, the firm incorporated LiDAR technology and created the TriDAR 3D sensor. Its commercial arm, Neptec Technologies Corp., has sold the technology to Orbital Sciences, which uses it to guide its Cygnus spacecraft during rendezvous and dock operations at the International Space Station.

  6. Breast Tissue 3D Segmentation and Visualization on MRI

    PubMed Central

    Cui, Xiangfei; Sun, Feifei

    2013-01-01

    Tissue segmentation and visualization are useful for breast lesion detection and quantitative analysis. In this paper, a 3D segmentation algorithm based on Kernel-based Fuzzy C-Means (KFCM) is proposed to separate breast MR images into different tissues. Then, an improved volume rendering algorithm based on a new transfer function model is applied to implement 3D breast visualization. Experimental results, shown visually, achieve reasonable consistency. PMID:23983676
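    The kernel-based variant is not detailed in the abstract; plain fuzzy C-means, which KFCM extends by replacing the Euclidean distance with a kernel-induced one, can be sketched on 1-D intensities (initialization and parameter choices here are illustrative):

    ```python
    import numpy as np

    def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
        """Plain fuzzy C-means on a 1-D intensity array x.
        Returns cluster centers and the (c, n) membership matrix."""
        centers = np.linspace(x.min(), x.max(), c)       # deterministic init
        for _ in range(iters):
            d = np.abs(x[None, :] - centers[:, None]) + 1e-12
            u = d ** (-2.0 / (m - 1.0))                  # unnormalized memberships
            u /= u.sum(axis=0)                           # each sample sums to 1
            centers = (u**m @ x) / (u**m).sum(axis=1)    # fuzzy-weighted means
        return centers, u
    ```

    Soft memberships (rather than hard labels) are what make FCM attractive for MR tissue segmentation, where partial-volume voxels genuinely belong to more than one tissue class.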

  7. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  8. Quasi 3D dosimetry (EPID, conventional 2D/3D detector matrices)

    NASA Astrophysics Data System (ADS)

    Bäck, A.

    2015-01-01

    Patient-specific pretreatment measurement for IMRT and VMAT QA should preferably give information with a high resolution in 3D. The ability to distinguish complex treatment plans, i.e. treatment plans with a difference between measured and calculated dose distributions that exceeds a specified tolerance, puts high demands on the dosimetry system used for the pretreatment measurements, and the results of the measurement evaluation need a clinical interpretation. There are a number of commercial dosimetry systems designed for pretreatment IMRT QA measurements. 2D arrays such as MapCHECK® (Sun Nuclear), MatriXXEvolution (IBA Dosimetry) and OCTAVIUS® 1500 (PTW), 3D phantoms such as OCTAVIUS® 4D (PTW), ArcCHECK® (Sun Nuclear) and Delta4 (ScandiDos), and software for EPID dosimetry and 3D reconstruction of the dose in the patient geometry such as EPIDoseTM (Sun Nuclear) and Dosimetry CheckTM (Math Resolutions) are available. None of these dosimetry systems can measure the 3D dose distribution with a high resolution (full 3D dose distribution); they can be called quasi 3D dosimetry systems. To be able to estimate the delivered dose in full 3D, the user is dependent on a calculation algorithm in the software of the dosimetry system. All the vendors of the dosimetry systems mentioned above provide calculation algorithms to reconstruct a full 3D dose in the patient geometry. This enables analyses of the differences between measured and calculated dose distributions in DVHs of the structures of clinical interest, which facilitates the clinical interpretation and is a promising tool to be used for pretreatment IMRT QA measurements. However, independent validation studies on the accuracy of those algorithms are scarce. Pretreatment IMRT QA using the quasi 3D dosimetry systems mentioned above relies on both measurement uncertainty and the accuracy of calculation algorithms. In this article, these quasi 3D dosimetry systems and their use in patient specific pretreatment IMRT

  9. bioWeb3D: an online webGL 3D data visualisation tool

    PubMed Central

    2013-01-01

    Background Data visualization is critical for interpreting biological data. However, in practice it can prove to be a bottleneck for non-trained researchers; this is especially true for three-dimensional (3D) data representation. Whilst existing software can provide all necessary functionalities to represent and manipulate biological 3D datasets, very few are easily accessible (browser-based), cross-platform and usable by non-expert users. Results An online HTML5/WebGL-based 3D visualisation tool has been developed to allow biologists to quickly and easily view interactive and customizable three-dimensional representations of their data along with multiple layers of information. Using the WebGL library Three.js, written in Javascript, bioWeb3D allows the simultaneous visualisation of multiple large datasets input via a simple JSON, XML or CSV file, which can be read and analysed locally thanks to HTML5 capabilities. Conclusions Using basic 3D representation techniques in a technologically innovative context, we provide a program that is not intended to compete with professional 3D representation software, but that instead enables a quick and intuitive representation of reasonably large 3D datasets. PMID:23758781

  10. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

    The interpretation of radiological images is routine, but it remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. An easier and more extensive visualization and exploitation of medical images can be reached through the use of computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of, and interaction with, virtual organs delineated from CT scans or MRI. This software provides 3D real-time surface rendering of anatomical structures, an accurate evaluation of volumes and distances, and the improvement of radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning, allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step of the future development of augmented reality and surgical simulation systems.

  11. Modeling and modification of medical 3D objects. The benefit of using a haptic modeling tool.

    PubMed

    Kling-Petersen, T; Rydmark, M

    2000-01-01

    The Computer Laboratory of the medical faculty in Goteborg (Mednet) has since the end of 1998 been one of a limited number of participants in the development of a new modeling tool together with SensAble Technologies Inc [http://www.sensable.com/]. The software, called SensAble FreeForm, was officially released at Siggraph in September 1999. Briefly, the software mimics the modeling techniques traditionally used by clay artists. An imported model or a user-defined block of "clay" can be modified using different tools such as a ball, square block, scrape, etc., via the use of a SensAble Technologies PHANToM haptic arm. The model deforms in 3D as a result of touching the "clay" with any selected tool, and the amount of deformation is linear in the force applied. By getting instantaneous haptic as well as visual feedback, precise and intuitive changes are easily made. While SensAble FreeForm lacks several of the features normally associated with a 3D modeling program (such as text handling, application of surface and bump maps, high-end rendering engines, etc.), its strength lies in the ability to rapidly create non-geometric 3D models. For medical use, very few anatomically correct models are created from scratch. However, FreeForm features tools that enable advanced modification of reconstructed or 3D-scanned models. One of the main problems with 3D laser scanning of medical specimens is that the technique usually leaves holes or gaps in the dataset corresponding to areas in shadow, such as orifices, deep grooves, etc. By using FreeForm's different tools, these defects are easily corrected and gaps are filled in. Similarly, traditional 3D reconstruction (based on serial sections etc.) often shows artifacts as a result of the triangulation and/or tessellation processes. These artifacts usually manifest as unnatural ridges or uneven areas ("the accordion effect"). FreeForm contains a smoothing algorithm that enables the user to select an area to be modified and subsequently apply

  12. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. PMID:26488641

  13. Prominent rocks - 3D

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Many prominent rocks near the Sagan Memorial Station are featured in this image, taken in stereo by the Imager for Mars Pathfinder (IMP) on Sol 3. 3D glasses are necessary to identify surface detail. Wedge is at lower left; Shark, Half-Dome, and Pumpkin are at center. Flat Top, about four inches high, is at lower right. The horizon in the distance is one to two kilometers away.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.


  14. 'Diamond' in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D, microscopic imager mosaic of a target area on a rock called 'Diamond Jenness' was taken after NASA's Mars Exploration Rover Opportunity ground into the surface with its rock abrasion tool for a second time.

    Opportunity has bored nearly a dozen holes into the inner walls of 'Endurance Crater.' On sols 177 and 178 (July 23 and July 24, 2004), the rover worked double-duty on Diamond Jenness. Surface debris and the bumpy shape of the rock resulted in a shallow and irregular hole, only about 2 millimeters (0.08 inch) deep. The final depth was not enough to remove all the bumps and leave a neat hole with a smooth floor. This extremely shallow depression was then examined by the rover's alpha particle X-ray spectrometer.

    On Sol 178, Opportunity's 'robotic rodent' dined on Diamond Jenness once again, grinding almost an additional 5 millimeters (about 0.2 inch). The rover then applied its Moessbauer spectrometer to the deepened hole. This double dose of Diamond Jenness enabled the science team to examine the rock at varying layers. Results from those grindings are currently being analyzed.

    The image mosaic is about 6 centimeters (2.4 inches) across.

  15. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software can simultaneously display LiDAR point clouds, draped videos with a moving footprint, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data, and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open-source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data-analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multi-source geo-referenced video streams. Once all these components are included, EarthScape will be a multi-purpose platform that provides data analysis, hybrid visualization and complex interaction at the same time. The software is available on demand for free at france@exelisvis.com.

  16. Accelerating orthodontic tooth movement: A new, minimally-invasive corticotomy technique using a 3D-printed surgical template

    PubMed Central

    Giansanti, Matteo

    2016-01-01

    Background A reduction in orthodontic treatment time can be attained using corticotomies. The aggressive nature of corticotomy, owing to the elevation of muco-periosteal flaps and the duration of the surgery, has made patients and the dental community reluctant to adopt it. This study aims to provide detailed information on the design and manufacture of a 3D-printed CAD-CAM (computer-aided design and computer-aided manufacturing) surgical guide which can aid the clinician in achieving a minimally-invasive, flapless corticotomy. Material and Methods An impression of the dental arches was created; the models were digitally acquired using a 3D scanner and saved as STereoLithography (STL) files. The patient underwent cone beam computed tomography (CBCT): images of the jaws and teeth were transformed into 3D models and saved as an STL file. An acrylic template with the design of a surgical guide was manufactured and scanned. The STL files of the jaws, scanned casts, and acrylic template were matched. 3D modeling software allowed the 3D models to be viewed from different perspectives and planes with accurate rendering. The 3D model of the acrylic template was transformed into a surgical guide with slots designed to guide, at first, a scalpel blade and then a piezoelectric cutting insert. The 3D STL model of the surgical guide was printed. Results This procedure allowed the manufacture of a 3D-printed CAD/CAM surgical guide, which overcomes the disadvantages of the corticotomy by removing the need for flap elevation. No discomfort, early surgical complications or unexpected events were observed. Conclusions The effectiveness of this minimally-invasive surgical technique can offer the clinician a valid alternative to other methods currently in use. Key words: Corticotomy, orthodontics, CAD/CAM, minimally invasive, surgical template, 3D printer. PMID:27031067

  17. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments automatically, posing challenges for data analysis. This paper presents a system for 3D modeling that detects planes in 3D point clouds, from which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
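The plane detection described above can be sketched as a generic 3D Hough voting scheme: each point votes for every sampled plane (normal, distance) pair passing through it, and peaks in the accumulator correspond to planes. This is a textbook formulation with illustrative bin counts, not the authors' implementation:

```python
import numpy as np

def hough_planes(points, n_theta=30, n_phi=30, n_rho=40):
    """Vote for planes rho = p . n(theta, phi) in a (normal, distance) accumulator."""
    thetas = np.linspace(0.0, np.pi, n_theta)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    # Unit normals sampled over the sphere, theta-major ordering.
    normals = np.stack([
        np.outer(np.sin(thetas), np.cos(phis)).ravel(),
        np.outer(np.sin(thetas), np.sin(phis)).ravel(),
        np.repeat(np.cos(thetas), n_phi),
    ], axis=1)
    rhos = points @ normals.T                              # signed distances (N, M)
    rho_max = np.abs(rhos).max() + 1e-9
    bins = ((rhos / rho_max + 1.0) / 2.0 * (n_rho - 1)).round().astype(int)
    acc = np.zeros((normals.shape[0], n_rho), dtype=int)
    for m in range(normals.shape[0]):                      # accumulate votes per normal
        np.add.at(acc[m], bins[:, m], 1)
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    rho = (j / (n_rho - 1) * 2.0 - 1.0) * rho_max
    return normals[i], rho, acc.max()
```

For a synthetic patch of points on the plane z = 1, the strongest accumulator cell recovers a normal close to (0, 0, 1) at distance about 1.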

  18. Rendering the Topological Spines

    SciTech Connect

    Nieves-Rivera, D.

    2015-05-05

    Many tools to analyze and represent high dimensional data already exist, yet most of them are not flexible, informative and intuitive enough to help scientists make the corresponding analyses and predictions, understand the structure and complexity of scientific data, get a complete picture of it and explore a greater number of hypotheses. With this in mind, N-Dimensional Data Analysis and Visualization (ND²AV) is being developed to serve as an interactive visual analysis platform, coupling a number of existing tools from statistics, machine learning and data mining with new techniques, in particular new visualization approaches. My task is to create the rendering and implementation of a new concept called topological spines in order to extend ND²AV's scope. Existing visualization tools create representations preserving either the topological properties of the data or the structural (geometric) ones, because preserving both simultaneously is challenging. Topological spines are introduced as a new approach that aims to preserve both by striking a balance between them. The renderer is written in OpenGL and C++ and is currently being tested before it is integrated into ND²AV. In this paper I present what topological spines are and how they are rendered.

  19. Visualization of 3D Geological Models on Google Earth

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Um, J.; Park, M.

    2013-05-01

    Google Earth combines satellite imagery, aerial photography, thematic maps and various data sets to make a three-dimensional (3D) interactive image of the world. Currently, Google Earth is a popular visualization tool in a variety of fields and plays an increasingly important role not only for private users in daily life, but also for scientists, practitioners, policymakers and stakeholders in research and application. In this study, a method to visualize 3D geological models on Google Earth is presented. COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) was used to represent different 3D geological models such as boreholes, fence sections, surface-based 3D volumes and 3D grids by triangle meshes (sets of triangles connected by their common edges or corners). In addition, we designed Keyhole Markup Language (KML, the XML-based scripting language of Google Earth) codes to import the COLLADA files into the 3D render window of Google Earth. The method was applied to the Grosmont formation in Alberta, Canada. The application showed that the combination of COLLADA and KML enables Google Earth to effectively visualize 3D geological structures and properties. [Figure: visualization of the (a) boreholes, (b) fence sections, (c) 3D volume model and (d) 3D grid model of the Grosmont formation on Google Earth]
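The COLLADA-plus-KML pattern the authors describe can be sketched as a minimal KML document that places a COLLADA mesh on the globe; the placemark name, coordinates and file name below are hypothetical, not values from the study:

```python
# Minimal KML that asks Google Earth to load and georeference a COLLADA model.
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
        <altitude>{alt}</altitude>
      </Location>
      <Link><href>{dae}</href></Link>
    </Model>
  </Placemark>
</kml>
"""

def make_model_kml(name, lon, lat, alt, dae):
    """Fill the template; `dae` is the path to the COLLADA (.dae) mesh."""
    return KML_TEMPLATE.format(name=name, lon=lon, lat=lat, alt=alt, dae=dae)
```

Saving the returned string as a `.kml` file (or zipping it with the mesh as `.kmz`) lets Google Earth render the model in place.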

  20. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which are quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method not only shows highly accurate 3D face shape results when compared with the ground truth, but is also robust to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  1. Glnemo2: Interactive Visualization 3D Program

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Charles

    2011-10-01

    Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia Qt 4.x API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density areas, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, real-time gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the MinGW compiler), and Mac OS X, thanks to the Qt 4 API.

  2. Sorting and hardware assisted rendering for volume visualization

    SciTech Connect

    Stein, C.; Becker, B.; Max, N.

    1994-03-01

    We present some techniques for volume rendering unstructured data. Interpolation between vertex colors and opacities is performed using hardware assisted texture mapping, and color is integrated for use with a volume rendering system. We also present an O(n^2) method for sorting n arbitrarily shaped convex polyhedra prior to visualization. It generalizes the Newell, Newell and Sancha sort for polygons to 3-D volume elements.
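A toy sketch of the ordering problem this solves: before compositing, convex cells must be drawn back to front along the view direction. The snippet below is only the cheap centroid pre-sort commonly used as a first pass; the paper's O(n^2) Newell-style pairwise test is what makes the order exact for overlapping cells:

```python
import numpy as np

def centroid_depth_sort(cells, view_dir):
    """Order convex cells back to front by centroid depth along view_dir.
    `cells` is a list of (n_vertices, 3) arrays; view_dir points from the
    camera into the scene, so a larger projection means farther away."""
    view_dir = np.asarray(view_dir, dtype=float)
    depths = [np.mean(np.asarray(c, dtype=float), axis=0) @ view_dir for c in cells]
    # Farther cells (larger depth) are drawn first.
    return sorted(range(len(cells)), key=lambda i: -depths[i])
```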

  3. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  4. Extra dimensions: 3D in PDF documentation

    SciTech Connect

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  5. Extra dimensions: 3D in PDF documentation

    DOE PAGESBeta

    Graf, Norman A.

    2011-01-11

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. By providing support for scripting and animation, temporal data can also be easily distributed to a wide, non-technical audience. We discuss how the field of radiation imaging could benefit from incorporating full 3D information about not only the detectors, but also the results of the experimental analyses, in its electronic publications. In this article, we present examples drawn from high-energy physics, mathematics and molecular biology which take advantage of this functionality. Furthermore, we demonstrate how 3D detector elements can be documented, using either CAD drawings or other sources such as GEANT visualizations as input.

  6. A specification of 3D manipulation in virtual environments

    NASA Technical Reports Server (NTRS)

    Su, S. Augustine; Furuta, Richard

    1994-01-01

    In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D counterpart of 2-D window systems. Both abstractions are intended to form the foundation for adaptable 3-D manipulation.

  7. Full 3D microwave quasi-holographic imaging

    NASA Astrophysics Data System (ADS)

    Castelli, Juan-Carlos; Tardivel, Francois

    A full 3D quasi-holographic image processing technique developed by ONERA is described. A complex backscattering coefficient of a drone scale model was measured for discrete values of the 3D backscattered wave vector in a frequency range between 4.5-8 GHz. The 3D image processing is implemented on a HP 1000 mini-computer and will be part of LASER 2 software to be used in three RCS measurement indoor facilities.
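The reconstruction step behind such quasi-holographic imaging can be sketched as a plain k-space inversion: complex backscattering coefficients sampled on a regular grid of wave vectors are Fourier-inverted into a reflectivity volume. This is a textbook formulation, not ONERA's LASER 2 processing:

```python
import numpy as np

def reflectivity_image(backscatter_k):
    """3D reflectivity magnitude from complex backscattering coefficients
    sampled on a regular wave-vector grid; fftshift centers the target."""
    return np.abs(np.fft.fftshift(np.fft.ifftn(backscatter_k)))
```

A constant spectrum (the signature of a point scatterer at the origin) inverts to a single bright voxel at the volume center.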

  8. PLOT3D Export Tool for Tecplot

    NASA Technical Reports Server (NTRS)

    Alter, Stephen

    2010-01-01

    The PLOT3D export tool for Tecplot solves the problem that data modified in Tecplot could not previously be written out for use by other computational science solvers. The PLOT3D Exporter add-on enables engineers to use one of the most commonly available visualization tools to output a standard format. The exportation of PLOT3D data from Tecplot has far-reaching effects because it allows for grid and solution manipulation within a graphical user interface (GUI) that is easily customized with macro-language-based and user-developed GUIs. The add-on also enables the use of Tecplot as an interpolation tool for solution conversion between grids of different types. This one add-on enhances the functionality of Tecplot so significantly that it offers the ability to incorporate Tecplot into a general suite of tools for computational science applications as a 3D graphics engine for visualization of all data. Within the PLOT3D Export add-on are several functions that enhance its operation and effectiveness. Unlike Tecplot's built-in output functions, the PLOT3D Export add-on enables the use of the zone selection dialog in Tecplot to choose which zones are to be written, offering three distinct options: output of active, inactive, or all zones (grid blocks). As the user modifies the zones to output with the zone selection dialog, the zones to be written are updated accordingly. This enables the use of Tecplot to create multiple configurations of a geometry being analyzed. For example, if an aircraft is loaded with multiple deflections of flaps, new configurations of that aircraft can be generated by activating and deactivating the zones for a specific flap setting and writing out only those zones. Thus, if ten flap settings are loaded into Tecplot, the PLOT3D Export software can output ten different configurations, one for each flap setting.

  9. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays

    PubMed Central

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-01-01

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor, since the definition of the 3D object profile clearly depends on the correct and accurate position response of each detector as well as on the size of the PSD array. PMID:26633403
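The sheet-of-light triangulation that turns a position reading into depth reduces to one formula; the geometry and symbols below are generic, not the paper's calibration model. A camera ray (x, z) = t · (u/f, 1) is intersected with the laser plane x = b − z · tan(θ), where u is the sensor coordinate, f the focal length, b the camera-to-laser baseline, and θ the sheet's tilt from the optical axis:

```python
import math

def triangulate_depth(u, focal, baseline, laser_angle):
    """Depth z of a laser-lit point from its sensor coordinate u.
    Solving z*u/focal = baseline - z*tan(laser_angle) for z."""
    return baseline / (u / focal + math.tan(laser_angle))
```

When the sheet is parallel to the optical axis (θ = 0) this reduces to the familiar z = b·f/u.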

  10. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.

    PubMed

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-01-01

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor, since the definition of the 3D object profile clearly depends on the correct and accurate position response of each detector as well as on the size of the PSD array. PMID:26633403

  11. 3D reconstruction based on CT image and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxun; Zhang, Mingmin

    2004-03-01

    Reconstructing a 3-D model of the liver and its internal piping system and simulating the liver surgical operation can increase the accuracy and safety of the operation, minimizing surgical trauma, shortening operation time, increasing the success rate, reducing medical costs and promoting the patient's recovery. This paper describes the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system from CT images and simulate the surgical operation. A direct volume rendering method establishes the 3D model of the liver. Under the OpenGL environment, a point-based rendering method displays the liver's internal piping system and the simulation of the surgical operation. Finally, we adopt the wavelet transform method to compress the medical image data.
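The core of the direct volume rendering step is compositing color and opacity along each viewing ray. A minimal front-to-back sketch follows; the transfer functions passed in are placeholders, not the paper's classification of CT values:

```python
def composite_ray(samples, color, opacity):
    """Front-to-back alpha compositing of scalar samples along one ray.
    `color(s)` and `opacity(s)` are per-sample transfer functions."""
    C, A = 0.0, 0.0
    for s in samples:
        a = opacity(s)
        C += (1.0 - A) * a * color(s)   # accumulate attenuated color
        A += (1.0 - A) * a              # accumulate opacity
        if A > 0.99:                    # early ray termination
            break
    return C, A
```

Running this per pixel over the CT volume, with a transfer function that maps liver-tissue densities to high opacity, yields the rendered model.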

  12. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.

  13. Multiple ray cluster rendering for interactive integral imaging system.

    PubMed

    Jiao, Shaohui; Wang, Xiaoguang; Zhou, Mingcai; Li, Weiming; Hong, Tao; Nam, Dongkyung; Lee, Jin-Ho; Wu, Enhua; Wang, Haitao; Kim, Ji-Yeun

    2013-04-22

    In this paper, we present an efficient Computer Generated Integral Imaging (CGII) method, called multiple ray cluster rendering (MRCR). Based on the MRCR, an interactive integral imaging system is realized, which provides accurate 3D images that follow changes in the observer's position in real time. The MRCR method can generate all the elemental image pixels within only one rendering pass by ray reorganization of multiple ray clusters and 3D content duplication. It is compatible with various graphic contents including meshes, point clouds, and medical data. Moreover, a multi-sampling method is embedded in the MRCR method to acquire anti-aliased 3D images. To the best of our knowledge, the MRCR method outperforms existing CGII methods in both speed and display quality. Experimental results show that the proposed CGII method can achieve real-time computational speed for large-scale 3D data with about 50,000 points. PMID:23609712

  14. 3D TRUMP - A GBI launch window tool

    NASA Astrophysics Data System (ADS)

    Karels, Steven N.; Hancock, John; Matchett, Gary

    3D TRUMP is a novel GPS and communications-link software analysis tool developed for the SDIO's Ground-Based Interceptor (GBI) program. 3D TRUMP uses a computationally efficient analysis method which provides key GPS-based performance measures for an entire GBI mission's reentry vehicle and interceptor trajectories. Algorithms and sample outputs are presented.

  15. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris is moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model from all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.
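The Doppler conversion underlying this mapping is the non-relativistic shift formula v = c · Δλ/λ; a minimal sketch (the line wavelengths in the example are arbitrary, not Cas A measurements):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(observed_nm, rest_nm):
    """Line-of-sight speed from the Doppler shift of an emission line
    (non-relativistic approximation; positive means receding)."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm
```

A line shifted from 500 nm to 505 nm implies material receding at about 3000 km/s; a shift to shorter wavelengths gives a negative (approaching) velocity.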

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these

  16. 3D Spectroscopy in Astronomy

    NASA Astrophysics Data System (ADS)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  17. A fast high accuracy volume renderer for unstructured data.

    SciTech Connect

    Angel, Edward S.; Moreland, Kenneth D.

    2004-07-01

    In this paper, we describe an unstructured mesh volume renderer. Our renderer is interactive and accurately integrates light intensity an order of magnitude faster than previous methods. We employ a projective technique that takes advantage of the expanded programmability of the latest 3D graphics hardware. We also analyze an optical model commonly used for scientific volume rendering and derive a new method to compute it that is very accurate but computationally feasible in real time. We demonstrate a system that can accurately produce a volume rendering of an unstructured mesh with a first-order approximation to any classification method. Furthermore, our system is capable of rendering over 300 thousand tetrahedra per second yet is independent of the classification scheme used.

  18. 3D Elevation Program—Virtual USA in 3D

    USGS Publications Warehouse

    Lukas, Vicki; Stoker, J.M.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called 'lidar' (light detection and ranging) to create a highly accurate virtual-reality map of the Nation. 3D maps have many uses, with new ones being discovered all the time.

  19. The EISCAT_3D Science Case

    NASA Astrophysics Data System (ADS)

    Tjulin, A.; Mann, I.; McCrea, I.; Aikio, A. T.

    2013-05-01

    projection in the high-latitude ionosphere. EISCAT_3D can also be used to study solar system properties. Thanks to the high power and great accuracy, mapping of objects like the Moon and asteroids is possible. With the high power and large antenna aperture, incoherent scatter radars can be extraordinarily good monitors of extraterrestrial dust and its interaction with the atmosphere. Although incoherent scatter radars, such as EISCAT_3D, are few in number, the power and versatility of their measurement technique mean that they can measure parameters which are not obtainable otherwise, and thus also be a cornerstone in the international efforts to measure and predict space weather effects. Finally, over the years the EISCAT radars have served as a testbed for new ideas in radar coding and data analysis. EISCAT_3D will be the first of a new generation of "software radars" whose advanced capabilities will be realised not by its hardware but by the flexibility and adaptability of the scheduling, beam-forming, signal processing and analysis software used to control the radar and process its data. Thus, new techniques will be developed into standard observing applications for implementation in the next generation of software radars.

  20. Elasticity-based three dimensional ultrasound real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.

    2009-02-01

    Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Their use, however, has been hindered by the lack of real-time visualization methods that are capable of producing high-quality 3D renderings of the target/surface of interest. Volume rendering is a well-known visualization method, which can display clear surfaces out of the acquired volumetric data, and has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target/surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetric and angiographic applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing 3D elasticity volumes from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on the GPU, which gives an update rate of 40 volumes/sec.
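
    The strain-to-opacity mapping the abstract describes can be sketched as follows. The linear ramp, the strain thresholds, and the 50/50 blend weight are illustrative assumptions, not the authors' actual transfer function:

```python
import numpy as np

def strain_to_opacity(strain, tau_low=0.02, tau_high=0.10):
    # Stiff tissue (low strain) is rendered opaque, soft background
    # (high strain) transparent; thresholds are illustrative.
    return np.clip((tau_high - strain) / (tau_high - tau_low), 0.0, 1.0)

def mixed_opacity(strain, grad_mag, w=0.5):
    # Blend strain-based opacity with a conventional gradient-magnitude
    # opacity, as the abstract suggests.
    grad_alpha = grad_mag / (grad_mag.max() + 1e-12)
    return w * strain_to_opacity(strain) + (1.0 - w) * grad_alpha

strain = np.array([0.01, 0.05, 0.20])   # stiff lesion, mid, soft
grad = np.array([0.3, 0.9, 0.1])
alpha = mixed_opacity(strain, grad)
```

In a real pipeline this per-voxel alpha would feed the ray-casting compositing step in place of (or mixed with) the usual transfer-function lookup.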

  1. Efficient volume rendering using octree space subdivision

    NASA Astrophysics Data System (ADS)

    Krumhauer, Peter; Tsygankov, Michael; Reich, Christian; Evgrafov, Anton

    1999-03-01

    This paper describes a discrete ray-tracing algorithm, which employs the adaptive hierarchical spatial subdivision (octree) technique for representing a 3D uniform binary voxel space. The binary voxel space contains voxels of two kinds: 'surface' and 'non-surface.' Surface voxels include property information such as the surface normal and color. The use of octrees dramatically reduces the memory required to store 3D models: the average compression ratio ranges from 1:24 to 1:50 compared to uncompressed voxels. A fast ray-casting algorithm called BOXER was developed, which allows rendering 256 × 256 × 256 and 512 × 512 × 512 volumes in near real-time on standard Intel-based PCs.
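
    The memory saving comes from collapsing uniform sub-cubes into single leaves. A minimal node-counting sketch of that idea (surface-property storage and the BOXER traversal itself are omitted):

```python
import numpy as np

def octree_nodes(vol):
    # A uniform cube becomes a single leaf; otherwise recurse into
    # the 8 octants. Returns the total node count so the saving over
    # raw voxel storage can be compared.
    n = vol.shape[0]
    if n == 1 or vol.min() == vol.max():
        return 1
    h = n // 2
    return 1 + sum(octree_nodes(vol[x:x + h, y:y + h, z:z + h])
                   for x in (0, h) for y in (0, h) for z in (0, h))

# A 16^3 binary volume containing a single flat 'surface' sheet.
vol = np.zeros((16, 16, 16), dtype=np.uint8)
vol[:, :, 8] = 1
nodes = octree_nodes(vol)
```

For this toy volume the tree needs 681 nodes versus 4096 raw voxels; real surface data is sparser still, which is where ratios like those quoted in the abstract come from.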

  2. Supernova Remnant in 3-D

    NASA Technical Reports Server (NTRS)

    2009-01-01

    wavelengths. Since the amount of the wavelength shift is related to the speed of motion, one can determine how fast the debris are moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass. was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through.

    The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is emitted not at discrete wavelengths but in a broad continuum. The blue filaments are therefore only a representation of the actual filaments observed at the blast wave.

    This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron.

    High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these
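
    The Doppler mapping described above rests on a simple relation: line-of-sight velocity is proportional to the fractional wavelength shift. A minimal illustration, using a mid-infrared line near 7 μm of the sort seen in these Spitzer observations (the observed wavelength here is made up):

```python
# Non-relativistic Doppler: v / c = (lambda_obs - lambda_rest) / lambda_rest.
C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity(lambda_obs, lambda_rest):
    # Positive = receding (redshifted), negative = approaching.
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# Illustrative values: a rest line at 6.985 microns observed at 6.90
# microns, i.e. blueshifted debris moving toward us.
v = radial_velocity(6.90, 6.985)
```

Combining this radial speed with the debris' position on the sky gives each knot's place in the 3-D model.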

  3. Vector quantization for volume rendering

    NASA Technical Reports Server (NTRS)

    Ning, Paul; Hesselink, Lambertus

    1992-01-01

    Volume rendering techniques typically process volumetric data in raw, uncompressed form. As algorithmic and architectural advances improve rendering speeds, however, larger data sets will be evaluated requiring consideration of data storage and transmission issues. In this paper, we analyze the data compression requirements for volume rendering applications and present a solution based on vector quantization. The proposed system compresses volumetric data and then renders images directly from the new data format. Tests on a fluid flow data set demonstrate that good image quality may be achieved at a compression ratio of 17:1 with only a 5 percent cost in additional rendering time.
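
    The core of such a scheme can be sketched in a few lines: partition the volume into small blocks, learn a codebook, and store only per-block indices. The 2×2×2 block size, the two-entry codebook, and the plain k-means trainer are illustrative assumptions, not the paper's actual codec:

```python
import numpy as np

def vq_compress(vol, k=2, iters=10):
    # Split the volume into 2x2x2 blocks, treat each block as an
    # 8-dimensional vector, and learn a k-entry codebook with plain
    # k-means; the compressed form is codebook + per-block indices.
    nx, ny, nz = (s // 2 for s in vol.shape)
    b = vol.reshape(nx, 2, ny, 2, nz, 2).transpose(0, 2, 4, 1, 3, 5)
    vecs = b.reshape(-1, 8).astype(float)
    code = vecs[np.linspace(0, len(vecs) - 1, k).astype(int)].copy()
    idx = np.zeros(len(vecs), dtype=int)
    for _ in range(iters):
        d = ((vecs[:, None, :] - code[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        for j in range(k):
            if (idx == j).any():
                code[j] = vecs[idx == j].mean(0)
    return code, idx

vol = np.zeros((8, 8, 8)); vol[:4] = 1.0     # two uniform halves
code, idx = vq_compress(vol, k=2)
# 512 voxels stored as 64 block indices plus a 2x8 codebook
```

Rendering then decodes blocks on the fly via `code[idx]`, which is what lets the renderer work directly from the compressed format.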

  4. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and related urban objects such as buildings, trees, vegetation, and man-made features. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and computer-vision-based modeling. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are representative software packages for these approaches, respectively, each with different methods suitable for image-based 3D city modeling. A literature review shows that, to date, no comprehensive comparative study is available on creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and output 3D model products. The study area for this research is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains the various governing parameters, factors, and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and comments on what can and cannot be done with each package. It concludes that every package has advantages and limitations, and the choice of software depends on the user's requirements for a 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  5. Design Software

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A NASA contractor and Small Business Innovation Research (SBIR) participant has converted its research into commercial software products for auto design, structural analysis and other applications. ViGYAN, Inc., utilizing the aeronautical research principle of computational fluid dynamics, has created - with VGRID3D and VPLOT3D - an easier alternative to conventional structured grids for fluid dynamic calculations.

  6. Digital Hammurabi: design and development of a 3D scanner for cuneiform tablets

    NASA Astrophysics Data System (ADS)

    Hahn, Daniel V.; Duncan, Donald D.; Baldwin, Kevin C.; Cohen, Jonathon D.; Purnomo, Budirijanto

    2006-02-01

    Cuneiform is an ancient form of writing in which wooden reeds were used to impress shapes upon moist clay tablets. Upon drying, the tablets preserved the written script with remarkable accuracy and durability. There are currently hundreds of thousands of cuneiform tablets spread throughout the world in both museums and private collections. The global scale of these artifacts presents several problems for scholars who wish to study them. It may be difficult or impossible to obtain access to a given collection. In addition, photographic records of the tablets many times prove to be inadequate for proper examination. Photographs lack the ability to alter the lighting conditions and view direction. As a solution to these problems, we describe a 3D scanner capable of acquiring the shape, color, and reflectance of a tablet as a complete 3D object. This data set could then be stored in an online library and manipulated by suitable rendering software that would allow a user to specify any view direction and lighting condition. The scanner utilizes a camera and telecentric lens to acquire images of the tablet under varying controlled illumination conditions. Image data are processed using photometric stereo and structured light techniques to determine the tablet shape; color information is reconstructed from primary color monochrome image data. The scanned surface is sampled at 26.8 μm lateral spacing and the height information is calculated on a much smaller scale. Scans of adjacent tablet sides are registered together to form a 3D surface model.
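
    The photometric-stereo step the abstract names can be illustrated compactly: under a Lambertian model, intensity is the dot product of the light direction with the albedo-scaled normal, so three or more images under known lights give a per-pixel linear system. The light directions and synthetic patch below are illustrative assumptions, not the scanner's actual configuration:

```python
import numpy as np

# Three known light directions (unit vectors), one per image.
L = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

def photometric_stereo(images, L):
    # Lambertian model: I = albedo * (L @ n). Solve the per-pixel
    # least-squares system for g = albedo * n, then split the
    # magnitude (albedo) from the direction (normal).
    I = images.reshape(len(images), -1)        # (n_lights, n_pixels)
    G = np.linalg.lstsq(L, I, rcond=None)[0]   # (3, n_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return (normals.T.reshape(images.shape[1:] + (3,)),
            albedo.reshape(images.shape[1:]))

# Synthetic check: a flat 4x4 patch facing straight up, albedo 0.8.
n_true = np.array([0.0, 0.0, 1.0])
imgs = np.stack([0.8 * max(float(l @ n_true), 0.0) * np.ones((4, 4))
                 for l in L])
normals, albedo = photometric_stereo(imgs, L)
```

The recovered normal field is what the structured-light depth data is fused with, and the albedo map supplies the relightable color.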

  7. Modular 3-D Transport model

    EPA Science Inventory

    MT3D was first developed by Chunmiao Zheng in 1990 at S.S. Papadopulos & Associates, Inc. with partial support from the U.S. Environmental Protection Agency (USEPA). Starting in 1990, MT3D was released as a public domain code from the USEPA. Commercial versions with enhanced capab...

  8. Market study: 3-D eyetracker

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A market study of a proposed version of a 3-D eyetracker for initial use at NASA's Ames Research Center was made. The commercialization potential of a simplified, less expensive 3-D eyetracker was ascertained. Primary focus on present and potential users of eyetrackers, as well as present and potential manufacturers has provided an effective means of analyzing the prospects for commercialization.

  9. LLNL-Earth3D

    2013-10-01

    Earth3D is a computer code designed to allow fast calculation of seismic rays and travel times through a 3D model of the Earth. LLNL is using this for earthquake location and global tomography efforts and such codes are of great interest to the Earth Science community.

  10. [3-D ultrasound in gastroenterology].

    PubMed

    Zoller, W G; Liess, H

    1994-06-01

    Three-dimensional (3D) sonography represents a development of noninvasive diagnostic imaging by real-time two-dimensional (2D) sonography. The use of transparent rotating scans, comparable to a block of glass, generates a 3D effect. The objective of the present study was to optimize 3D presentation of abdominal findings. Additional investigations were made with a new volumetric program to determine the volume of selected findings of the liver. The results were compared with the estimated volumes of 2D sonography and 2D computed tomography (CT). For the processing of 3D images, typical parameter constellations were found for the different findings, which facilitated processing of 3D images. In more than 75% of the cases examined we found an optimal 3D presentation of sonographic findings with respect to the evaluation criteria developed by us for the 3D imaging of processed data. Great differences were found for the estimated volumes of the findings of the liver concerning the three different techniques applied. 3D ultrasound represents a valuable method to judge morphological appearance in abdominal findings. The possibility of volumetric measurements enlarges its potential diagnostic significance. Further clinical investigations are necessary to find out whether definite differentiation between benign and malignant findings is possible. PMID:7919882

  11. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  12. 3D World Building System

    ScienceCinema

    None

    2014-02-26

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  13. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information gathering capability are discussed.

  14. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

    at ~10 nm resolution over hundreds of microns in 3 spatial dimensions. Super-resolution microscopy methods based upon single molecule localization were originally limited to 2D slices. Recent advances in this field have extended these methods to three dimensions. However, the 3D rendering was limited to viewing sparsely labeled cellular structures over a z-depth of less than 1 micron. Our first goal is to extend super-resolution microscopy to z-depths of hundreds of microns. This substantial improvement is needed to image polymer nanostructure over functionally relevant length scales. (2) Benchmark this instrument by studying the 3D nanostructure of diblock co-polymer morphologies. We will test and benchmark our instrument by imaging fluorescently labeled diblock copolymers, molecules that self-assemble into a variety of 3D nano-structures. We reiterate that these polymers are useful for a variety of applications ranging from lithography to light harvesting.

  15. 3D printing in dentistry.

    PubMed

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery. PMID:26657435

  16. Emerging Applications of Bedside 3D Printing in Plastic Surgery.

    PubMed

    Chae, Michael P; Rozen, Warren M; McMenamin, Paul G; Findlay, Michael W; Spychal, Robert T; Hunter-Smith, David J

    2015-01-01

    Modern imaging techniques are an essential component of preoperative planning in plastic and reconstructive surgery. However, conventional modalities, including three-dimensional (3D) reconstructions, are limited by their representation on 2D workstations. 3D printing, also known as rapid prototyping or additive manufacturing, was once the province of industry to fabricate models from a computer-aided design (CAD) in a layer-by-layer manner. The early adopters in clinical practice have embraced the medical imaging-guided 3D-printed biomodels for their ability to provide tactile feedback and a superior appreciation of visuospatial relationship between anatomical structures. With increasing accessibility, investigators are able to convert standard imaging data into a CAD file using various 3D reconstruction softwares and ultimately fabricate 3D models using 3D printing techniques, such as stereolithography, multijet modeling, selective laser sintering, binder jet technique, and fused deposition modeling. However, many clinicians have questioned whether the cost-to-benefit ratio justifies its ongoing use. The cost and size of 3D printers have rapidly decreased over the past decade in parallel with the expiration of key 3D printing patents. Significant improvements in clinical imaging and user-friendly 3D software have permitted computer-aided 3D modeling of anatomical structures and implants without outsourcing in many cases. These developments offer immense potential for the application of 3D printing at the bedside for a variety of clinical applications. In this review, existing uses of 3D printing in plastic surgery practice spanning the spectrum from templates for facial transplantation surgery through to the formation of bespoke craniofacial implants to optimize post-operative esthetics are described. Furthermore, we discuss the potential of 3D printing to become an essential office-based tool in plastic surgery to assist in preoperative planning, developing

  17. Emerging Applications of Bedside 3D Printing in Plastic Surgery

    PubMed Central

    Chae, Michael P.; Rozen, Warren M.; McMenamin, Paul G.; Findlay, Michael W.; Spychal, Robert T.; Hunter-Smith, David J.

    2015-01-01

    Modern imaging techniques are an essential component of preoperative planning in plastic and reconstructive surgery. However, conventional modalities, including three-dimensional (3D) reconstructions, are limited by their representation on 2D workstations. 3D printing, also known as rapid prototyping or additive manufacturing, was once the province of industry to fabricate models from a computer-aided design (CAD) in a layer-by-layer manner. The early adopters in clinical practice have embraced the medical imaging-guided 3D-printed biomodels for their ability to provide tactile feedback and a superior appreciation of visuospatial relationship between anatomical structures. With increasing accessibility, investigators are able to convert standard imaging data into a CAD file using various 3D reconstruction softwares and ultimately fabricate 3D models using 3D printing techniques, such as stereolithography, multijet modeling, selective laser sintering, binder jet technique, and fused deposition modeling. However, many clinicians have questioned whether the cost-to-benefit ratio justifies its ongoing use. The cost and size of 3D printers have rapidly decreased over the past decade in parallel with the expiration of key 3D printing patents. Significant improvements in clinical imaging and user-friendly 3D software have permitted computer-aided 3D modeling of anatomical structures and implants without outsourcing in many cases. These developments offer immense potential for the application of 3D printing at the bedside for a variety of clinical applications. In this review, existing uses of 3D printing in plastic surgery practice spanning the spectrum from templates for facial transplantation surgery through to the formation of bespoke craniofacial implants to optimize post-operative esthetics are described. Furthermore, we discuss the potential of 3D printing to become an essential office-based tool in plastic surgery to assist in preoperative planning, developing

  18. Openwebglobe 2: Visualization of Complex 3D-GEODATA in the (mobile) Webbrowser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data-processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast number of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. The paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.

  19. McIDAS-V: Advanced Visualization for 3D Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Rink, T.; Achtor, T. H.

    2010-12-01

    McIDAS-V is a Java-based, open-source, freely available software package for analysis and visualization of geophysical data. Its advanced capabilities provide very interactive 4-D displays, including 3D volumetric rendering and fast sub-manifold slicing, linked to an abstract mathematical data model with built-in metadata for units, coordinate system transforms and sampling topology. A Jython interface provides user-defined analysis and computation in terms of the internal data model. These powerful capabilities to integrate data, analysis and visualization are being applied to hyper-spectral sounding retrievals (e.g., AIRS and IASI) of moisture and cloud density to interrogate and analyze their 3D structure, as well as to validate them against instruments such as CALIPSO, CloudSat and MODIS. The object-oriented framework design allows for specialized extensions for novel displays and new sources of data. Community-defined CF conventions for gridded data are understood by the software, and such data can be immediately imported into the application. This presentation will show examples of how McIDAS-V is used in 3-dimensional data analysis, display and evaluation.

  20. Spacecraft 3D Augmented Reality Mobile App

    NASA Technical Reports Server (NTRS)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  1. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance
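
    A minimal 2D sketch of the underlying LIC idea, averaging a noise texture along streamlines of the vector field, is given below. The box kernel, Euler integration, periodic boundaries, and toy uniform field are simplifications; the paper's 3D, view-dependent, GPU-based machinery is omitted:

```python
import numpy as np

def lic2d(vx, vy, noise, L=10, h=0.5):
    # For each pixel, trace the streamline forward and backward and
    # average the noise samples along it (box kernel, Euler steps).
    H, W = noise.shape
    out = np.zeros_like(noise)
    for i in range(H):
        for j in range(W):
            acc, cnt = 0.0, 0
            for sgn in (1.0, -1.0):
                x, y = float(j), float(i)
                for _ in range(L):
                    yi = int(round(float(y))) % H
                    xi = int(round(float(x))) % W
                    acc += noise[yi, xi]
                    cnt += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    norm = float(np.hypot(u, v))
                    if norm < 1e-9:
                        break
                    x += sgn * h * u / norm
                    y += sgn * h * v / norm
            out[i, j] = acc / cnt
    return out

rng = np.random.default_rng(0)
noise = rng.random((32, 32))
vx, vy = np.ones((32, 32)), np.zeros((32, 32))   # uniform horizontal flow
img = lic2d(vx, vy, noise)
```

The output is smeared along the flow direction, which is exactly the streak pattern LIC uses to reveal the field; the paper's contribution is computing only the streaks the current view actually needs.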

  2. Automated 3D vascular segmentation in CT hepatic venography

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Lucidarme, Olivier; Preteux, Francoise

    2005-08-01

    In the framework of preoperative evaluation of the hepatic venous anatomy in living-donor liver transplantation or oncologic resections, this paper proposes an automated approach for the 3D segmentation of the liver vascular structure from 3D CT hepatic venography data. The developed segmentation approach takes into account the specificities of anatomical structures in terms of spatial location, connectivity and morphometric properties. It implements basic and advanced morphological operators (closing, geodesic dilation, gray-level reconstruction, sup-constrained connection cost) in mono- and multi-resolution filtering schemes in order to achieve an automated 3D reconstruction of the opacified hepatic vessels. A thorough investigation of the venous anatomy including morphometric parameter estimation is then possible via computer-vision 3D rendering, interaction and navigation capabilities.
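
    To give a flavor of morphological filtering plus connectivity analysis on opacified vessels, here is a crude sketch: grey-level closing to bridge small gaps, a threshold, then keeping the largest connected component. The structuring-element size and threshold are illustrative; the paper's actual pipeline (geodesic dilation, gray-level reconstruction, sup-constrained connection cost, multi-resolution schemes) is far richer:

```python
import numpy as np
from scipy import ndimage

def segment_vessels(vol, threshold):
    # Grey closing fills small dark gaps inside bright vessels; the
    # vessel tree is assumed to be the largest bright connected
    # component after thresholding.
    closed = ndimage.grey_closing(vol, size=(3, 3, 3))
    mask = closed > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

# Synthetic volume: a bright 'vessel' line plus an isolated speck.
vol = np.zeros((16, 16, 16))
vol[8, 8, :] = 100.0          # vessel
vol[2, 2, 2] = 100.0          # noise speck
seg = segment_vessels(vol, threshold=50.0)
```

On this toy input the isolated speck is discarded and only the connected vessel line survives.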

  3. Bioprinting of 3D hydrogels.

    PubMed

    Stanton, M M; Samitier, J; Sánchez, S

    2015-08-01

    Three-dimensional (3D) bioprinting has recently emerged as an extension of 3D material printing, by using biocompatible or cellular components to build structures in an additive, layer-by-layer methodology for encapsulation and culture of cells. These 3D systems allow for cell culture in a suspension for formation of highly organized tissue or controlled spatial orientation of cell environments. The in vitro 3D cellular environments simulate the complexity of an in vivo environment and natural extracellular matrices (ECM). This paper will focus on bioprinting utilizing hydrogels as 3D scaffolds. Hydrogels are advantageous for cell culture as they are highly permeable to cell culture media, nutrients, and waste products generated during metabolic cell processes. They have the ability to be fabricated in customized shapes with various material properties with dimensions at the micron scale. 3D hydrogels are a reliable method for biocompatible 3D printing and have applications in tissue engineering, drug screening, and organ on a chip models. PMID:26066320

  4. Unassisted 3D camera calibration

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
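
    A toy version of the estimation step can be written as a least-squares fit over matched keypoints. This models only a small roll plus a vertical offset (dy = theta·x + ty); it is a stand-in for the paper's full roll/pitch/yaw/scale estimation, and the simulated miscalibration values are made up:

```python
import numpy as np

def fit_vertical_misalignment(pts_l, pts_r):
    # Fit dy = theta * x + ty by least squares over matched keypoints:
    # theta approximates a small roll difference between the cameras,
    # ty a vertical offset. Returns the worst residual vertical
    # disparity after correction.
    dy = pts_r[:, 1] - pts_l[:, 1]
    A = np.column_stack([pts_l[:, 0], np.ones(len(pts_l))])
    (theta, ty), *_ = np.linalg.lstsq(A, dy, rcond=None)
    residual = dy - A @ np.array([theta, ty])
    return float(theta), float(ty), float(np.abs(residual).max())

rng = np.random.default_rng(1)
pts_l = rng.uniform(0, 640, size=(50, 2))
theta_true, ty_true = 0.01, 3.0          # simulated miscalibration
pts_r = pts_l + np.column_stack(
    [np.zeros(50), theta_true * pts_l[:, 0] + ty_true])
theta, ty, res = fit_vertical_misalignment(pts_l, pts_r)
```

The residual after the fit plays the role of the "remaining vertical disparity" the paper evaluates against the tolerable margin.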

  5. Sea modeling and rendering

    NASA Astrophysics Data System (ADS)

    Cathala, Thierry; Latger, Jean

    2010-10-01

    More and more defence and civil applications require simulation of a marine synthetic environment. Currently, the "Future Anti-Surface-Guided-Weapon" (FASGW) or "anti-navire léger" (ANL) missile needs this kind of modelling. This paper presents a set of technical enhancements of the SE-Workbench that aim at better representing the sea profile and the interaction with targets. The operational scenario variability is a key criterion: the generic geographical area (e.g. Persian Gulf, coast of Somalia, ...), the type of situation (e.g. peace keeping, peace enforcement, anti-piracy, drug interdiction, ...), the objectives (political, strategic, or military), the description of the mission(s) (e.g. anti-piracy) and operation(s) (e.g. surveillance and reconnaissance, escort, convoying) to achieve the objectives, and the type of environment (weather, time of day, geography [coastlines, islands, hills/mountains]). The paper focuses on several points, such as the dual rendering using either ray tracing [with GPGPU optimization] or rasterization [with GPU shader optimization], the modelling of the sea surface based on hypertextures and shaders, wake modelling, buoyancy models for targets, the interaction of coast and littoral, and the dielectric infrared modelling of water material.

  6. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, state-of-the-art hierarchical radiosity methods are examined for their suitability for parallel implementation and for scalability. Significant enhancements are also introduced which both strengthen their theoretical foundations and improve the images they generate. The resulting hierarchical radiosity algorithm is then examined for sources of parallelism and for an architectural mapping; several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of parallelizing the algorithm. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method or provide an entirely new forum for the application of hierarchical methods.
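
    One standard way to obtain a symmetric system from the radiosity equation (not necessarily the exact form used in the dissertation) relies on the reciprocity relation for form factors:

```latex
% Radiosity system: B_i = E_i + \rho_i \sum_j F_{ij} B_j,
% i.e. (I - \rho F) B = E.  Scale row i by A_i/\rho_i:
\frac{A_i}{\rho_i} B_i - A_i \sum_j F_{ij} B_j = \frac{A_i}{\rho_i} E_i,
\qquad
K_{ij} = \frac{A_i}{\rho_i}\,\delta_{ij} - A_i F_{ij}.
% Reciprocity A_i F_{ij} = A_j F_{ji} gives K_{ij} = K_{ji} for i \neq j,
% so K is symmetric, opening the door to solvers such as conjugate gradients.
```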

  7. FDF in US3D

    NASA Astrophysics Data System (ADS)

    Otis, Collin; Ferrero, Pietro; Candler, Graham; Givi, Peyman

    2013-11-01

    The scalar filtered mass density function (SFMDF) methodology is implemented into the computer code US3D, an unstructured Eulerian finite volume hydrodynamic solver that has proven very effective for the simulation of compressible turbulent flows. The resulting SFMDF-US3D code is employed for large eddy simulation (LES) on unstructured meshes. Simulations are conducted of subsonic and supersonic flows under non-reacting and reacting conditions. The consistency and accuracy of the simulated results are assessed, along with an appraisal of the overall performance of the methodology. The SFMDF-US3D code is now capable of simulating high-speed flows in complex configurations.

  8. Real time 3D scanner: investigations and results

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Pflug, Leopold

    1993-12-01

    This article presents a concept for the reconstruction of 3-D objects using non-invasive, touchless techniques. The principle of the method is to project parallel optical interference fringes onto an object and then to record the object from two angles of view. With appropriate processing, the 3-D object can be reconstructed even when it has no plane of symmetry. The 3-D surface data are available immediately in digital form for computer visualization and for analysis software tools. The optical set-up for recording the 3-D object, the extraction and treatment of the 3-D data, and the reconstruction of the 3-D object are reported and commented on. This application is intended for reconstructive/cosmetic surgery, CAD, animation, and research purposes.

  9. Fast DRR splat rendering using common consumer graphics hardware

    SciTech Connect

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-11-15

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90%, depending on the amount of data. For instance, rendering a volume of 2x10^6 voxels is feasible at an update rate of 38 Hz, compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher-resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.
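
    The core of DRR formation, stripped of wobbled splatting and GPU specifics, is a Beer-Lambert line integral of attenuation through the CT volume. The toy sketch below (parallel projection along one axis, synthetic volume, invented attenuation values) illustrates only that core, not the paper's algorithm:

```python
import numpy as np

# Toy CT volume: a dense sphere of attenuating material inside air.
nz, ny, nx = 64, 64, 64
z, y, x = np.mgrid[0:nz, 0:ny, 0:nx]
mu = np.where((x - 32)**2 + (y - 32)**2 + (z - 32)**2 < 20**2, 0.02, 0.0)

# Parallel-projection DRR: line integral of attenuation along the z axis,
# then the Beer-Lambert law to get transmitted intensity per ray.
path_integral = mu.sum(axis=0)            # shape (ny, nx)
drr = np.exp(-path_integral)              # 1.0 = unattenuated ray
```

    Rays through the sphere are attenuated, while corner rays pass through air unchanged; splatting approaches arrive at the same integral by scattering voxel footprints onto the image instead of gathering along rays.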

  10. Wavefront construction in 3-D

    SciTech Connect

    Chilcoat, S.R.; Hildebrand, S.T.

    1995-12-31

    Travel time computation in inhomogeneous media is essential for pre-stack Kirchhoff imaging in areas such as the sub-salt province in the Gulf of Mexico. The 2D algorithm published by Vinje et al. has been extended to 3D to compute wavefronts in complicated inhomogeneous media. The 3D wavefront construction algorithm provides many advantages over conventional ray tracing and other methods of computing travel times in 3D. The algorithm dynamically maintains a reasonably consistent ray density without making a priori guesses at the number of rays to shoot. The determination of caustics in 3D is a straightforward geometric procedure. The wavefront algorithm also enables the computation of multi-valued travel time surfaces.

  11. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target using pulse time-of-flight measurement. Due to the sampling-rate limit of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. To remove the dependence of range resolution on sampling rate, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous-wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high-range-resolution images with a low sampling rate.
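
    For context, the second-order correlation that underlies ghost imaging (shown here in its conventional computational form, not the heterodyne range-imaging scheme of the paper) can be sketched as follows; the object, patterns, and sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_meas = 16, 4000

# Unknown object transmission function (a simple square aperture).
obj = np.zeros((n_pix, n_pix))
obj[5:11, 5:11] = 1.0

# Random illumination patterns and the corresponding single-pixel
# "bucket" signals (total light passing the object per pattern).
patterns = rng.random((n_meas, n_pix, n_pix))
bucket = (patterns * obj).sum(axis=(1, 2))

# Second-order correlation  <S * I> - <S><I>  recovers the object shape.
g = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)
```

    Pixels inside the aperture correlate with the bucket signal and stand out in `g`; the heterodyne scheme adds a temporal beat signal on top of this spatial correlation to encode range.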

  12. Interpreting three-dimensional structures from two-dimensional images: a web-based interactive 3D teaching model of surgical liver anatomy

    PubMed Central

    Crossingham, Jodi L; Jenkinson, Jodie; Woolridge, Nick; Gallinger, Steven; Tait, Gordon A; Moulton, Carol-Anne E

    2009-01-01

    Background: Given the increasing number of indications for liver surgery and the growing complexity of operations, many trainees in surgical, imaging and related subspecialties require a good working knowledge of the complex intrahepatic anatomy. Computed tomography (CT), the most commonly used liver imaging modality, enhances our understanding of liver anatomy, but comprises a two-dimensional (2D) representation of a complex 3D organ. It is challenging for trainees to acquire the necessary skills for converting these 2D images into 3D mental reconstructions because learning opportunities are limited and internal hepatic anatomy is complicated, asymmetrical and variable. We have created a website that uses interactive 3D models of the liver to assist trainees in understanding the complex spatial anatomy of the liver and to help them create a 3D mental interpretation of this anatomy when viewing CT scans. Methods: Computed tomography scans were imported into DICOM imaging software (OsiriX™) to obtain 3D surface renderings of the liver and its internal structures. Using these 3D renderings as a reference, 3D models of the liver surface and the intrahepatic structures, portal veins, hepatic veins, hepatic arteries and the biliary system were created using 3D modelling software (Cinema 4D™). Results: Using current best practices for creating multimedia tools, a unique, freely available, online learning resource has been developed, entitled Visual Interactive Resource for Teaching, Understanding And Learning Liver Anatomy (VIRTUAL Liver) (http://pie.med.utoronto.ca/VLiver). This website uses interactive 3D models to provide trainees with a constructive resource for learning common liver anatomy and liver segmentation, and facilitates the development of the skills required to mentally reconstruct a 3D version of this anatomy from 2D CT scans. Discussion: Although the intended audience for VIRTUAL Liver consists of residents in various medical and surgical specialties

  13. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
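
    A minimum spanning tree over a 3D catalog, the structure underlying the clustering described above, can be computed with Prim's algorithm in a few lines; the random points below stand in for galaxy positions:

```python
import numpy as np

rng = np.random.default_rng(6)
pts = rng.random((50, 3))                      # mock 3D galaxy positions

# Pairwise Euclidean distances between all points.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Prim's algorithm: grow the tree one vertex at a time, always adding
# the cheapest edge from the tree to a vertex outside it.
n = len(pts)
in_tree = np.zeros(n, dtype=bool)
in_tree[0] = True
best = d[0].copy()            # cheapest known edge from the tree to each vertex
parent = np.zeros(n, dtype=int)
edges = []
for _ in range(n - 1):
    j = int(np.argmin(np.where(in_tree, np.inf, best)))
    edges.append((parent[j], j))
    in_tree[j] = True
    closer = d[j] < best      # vertex j now offers a cheaper connection
    parent[closer] = j
    best = np.minimum(best, d[j])

total_length = sum(d[i, j] for i, j in edges)
```

    The resulting edge list is exactly what would be exported to Blender for rendering, with edge-length statistics feeding the clustering step.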

  14. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
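
    As an illustration of the kind of derived function PLOT3D computes from a solution file, pressure and Mach number follow from density, momentum, and stagnation energy via ideal-gas relations. The dimensional values below are invented, and PLOT3D itself works with nondimensional variables, so this is only a sketch of the physics:

```python
import math

# Conserved variables at one grid point, as stored in a PLOT3D-style
# solution ("q") file: density, x/y/z-momentum, stagnation energy per
# unit volume.  (Values invented; PLOT3D stores nondimensional data.)
rho, rho_u, rho_v, rho_w, e0 = 1.225, 245.0, 0.0, 0.0, 215000.0
gamma = 1.4                                  # ideal-gas ratio of specific heats

u, v, w = rho_u / rho, rho_v / rho, rho_w / rho
speed2 = u * u + v * v + w * w

p = (gamma - 1.0) * (e0 - 0.5 * rho * speed2)   # static pressure
a = math.sqrt(gamma * p / rho)                  # local speed of sound
mach = math.sqrt(speed2) / a
```

    Quantities like shock indicators or surface pressure coefficients are further algebraic combinations of the same five stored variables, which is why the solution file alone supports so many plot functions.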

  15. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  16. Combinatorial 3D Mechanical Metamaterials

    NASA Astrophysics Data System (ADS)

    Coulais, Corentin; Teomy, Eial; de Reus, Koen; Shokef, Yair; van Hecke, Martin

    2015-03-01

    We present a class of elastic structures which exhibit 3D folding motion. Our structures consist of cubic lattices of anisotropic unit cells that can be tiled in a complex combinatorial fashion. We design and 3D-print this complex ordered mechanism, in which we combine elastic hinges and defects to tailor the mechanics of the material. Finally, we use this large design space to encode smart functionalities such as surface patterning and multistability.

  17. A 3D interactive multi-object segmentation tool using local robust statistics driven active contours.

    PubMed

    Gao, Yi; Kikinis, Ron; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen

    2012-08-01

    Extracting anatomically and functionally significant structures is one of the important tasks in both the theoretical study of medical image analysis and the clinical and practical community. In the past, much work was dedicated only to algorithmic development. Nevertheless, for clinical end users, a well-designed algorithm with interactive software is necessary for an algorithm to be utilized in their daily work. Furthermore, the software should be open source in order to be used and validated not only by the authors but by the entire community. Therefore, the contribution of the present work is twofold: first, we propose a new robust-statistics-based conformal metric and a conformal-area-driven multiple-active-contour framework to simultaneously extract multiple targets from MR and CT medical imagery in 3D. Second, an open-source, graphically interactive 3D segmentation tool based on the aforementioned contour evolution is implemented and is publicly available for end users on multiple platforms. In using this software for the segmentation task, the process is initiated by user-drawn strokes (seeds) in the target region of the image. Then, local robust statistics are used to describe the object features, and such features are learned adaptively from the seeds under a non-parametric estimation scheme. Subsequently, several active contours evolve simultaneously, with their interactions motivated by the principles of action and reaction; this not only guarantees mutual exclusiveness among the contours, but also no longer relies on the assumption that the multiple objects fill the entire image domain, which was tacitly or explicitly assumed in many previous works. In doing so, the contours interact and converge to equilibrium at the desired positions of the multiple objects.
Furthermore, with the aim of not only validating the algorithm and the software, but also demonstrating how the tool is to be used, we provide
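
    The "local robust statistics" idea, learning a feature description from user seeds with estimators that tolerate outliers, can be illustrated with a median/MAD classifier on a synthetic image. This is a deliberately simplified stand-in for the paper's conformal active-contour machinery:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic image: bright target on a darker background, corrupted by
# heavy-tailed (outlier-prone) noise.
img = np.full((64, 64), 50.0)
img[20:40, 20:40] = 120.0
img += rng.standard_t(df=2, size=img.shape) * 5.0

# User "seed" stroke inside the target; learn robust statistics from it.
seed = img[25:35, 25:35].ravel()
med = np.median(seed)
mad = np.median(np.abs(seed - med))          # robust spread estimate

# Robust z-score of every pixel w.r.t. the seed statistics; small values
# mark pixels that resemble the seeded object (1.4826 scales MAD to sigma).
z = np.abs(img - med) / (1.4826 * mad + 1e-12)
mask = z < 3.0
```

    Because median and MAD ignore extreme values, a few noisy seed pixels do not corrupt the learned object description, which is the property the paper exploits to drive its contours.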

  18. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for the Communications Link Analysis and Simulation System (CLASS) at NASA's Goddard Space Flight Center (GSFC). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  19. 11. Photographic copy of rendering (February 15, 1913, original rendering ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Photographic copy of rendering (February 15, 1913, original rendering in Archives, Public Affairs Department, Sears Merchandise Group, Hoffman Estates, Illinois), Artist unknown. OVERALL VIEW OF MAIL ORDER PLANT, VIEW TO SOUTH - Sears Roebuck & Company Mail Order Plant, Bounded by Lexington & Grenshaw Streets, Kedzie Avenue & Independence Boulevard, Chicago, Cook County, IL

  20. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  2. Real time 3D and heterogeneous data fusion

    SciTech Connect

    Little, C.Q.; Small, D.E.

    1998-03-01

    This project visualizes characterization data in a 3D setting, in real time. Real time in this sense means collecting the data and presenting it before it delays the user, and processing faster than the acquisition systems so that no bottlenecks occur. The goals have been to build a volumetric viewer to display 3D data, to demonstrate projecting other data, such as images, onto the 3D data, and to display both the 3D data and projected images as fast as the data become available. The authors have examined several ways to display 3D surface data; the most effective was generating polygonal surface meshes. They have created surface maps from a continuous stream of 3D range data, fused image data onto the geometry, and displayed the data with a standard 3D rendering package. In parallel with this, they have developed a method to project real-time images onto the surface created. A key component is mapping the data onto the correct surfaces, which requires a priori positional information along with accurate calibration of the camera and lens system.
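
    Projecting a camera image onto geometry, as described, hinges on a calibrated pinhole model mapping 3D surface points to pixel (texture) coordinates. The intrinsics and pose below are invented for illustration:

```python
import numpy as np

# Calibrated pinhole camera: intrinsics K and extrinsic pose (R, t).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # camera aligned with world axes
t = np.array([0.0, 0.0, 0.0])      # camera at the world origin

def project(points_3d):
    """Project Nx3 world points to pixel coordinates for texture lookup."""
    cam = points_3d @ R.T + t           # world -> camera frame
    uvw = cam @ K.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

# A surface vertex 2 m in front of the camera, slightly off-axis.
pix = project(np.array([[0.1, -0.05, 2.0]]))
```

    Each mesh vertex is projected this way to obtain texture coordinates into the live image; inaccurate calibration of K, R, or t is what smears the projected imagery onto the wrong surfaces.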

  3. Construction of programmable interconnected 3D microfluidic networks

    NASA Astrophysics Data System (ADS)

    Hunziker, Patrick R.; Wolf, Marc P.; Wang, Xueya; Zhang, Bei; Marsch, Stephan; Salieb-Beugelaar, Georgette B.

    2015-02-01

    Microfluidic systems represent a key-enabling platform for novel diagnostic tools for use at the point-of-care in clinical contexts as well as for evolving single cell diagnostics. The design of 3D microfluidic systems is an active field of development, but construction of true interconnected 3D microfluidic networks is still a challenge, in particular when the goal is rapid prototyping, accurate design and flexibility. We report a novel approach for the construction of programmable 3D microfluidic systems, consisting of modular 3D template casting of interconnected threads to allow user-programmable flow paths, and we examine its structural characteristics and modular function. To overcome problems with thread template casting reported in the literature, low-surface-energy polymer threads were used, which allow solvent-free production. Connected circular channels with excellent roundness and low diameter variability were created. Variable channel termination allowed programming a flow path on-the-fly, thus rendering the resulting 3D microfluidic systems highly customizable even after production. Thus, construction of programmable/reprogrammable fully 3D microfluidic systems by template casting of a network of interconnecting threads is feasible, and leads to high-quality and highly reproducible, complex 3D geometries.

  4. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
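
    A minimal 2D LIC, the building block the article generalizes to volumetric flow, can be sketched as follows (fixed-step Euler streamline tracing and a box filter; real implementations use better integrators and kernels):

```python
import numpy as np

rng = np.random.default_rng(3)
h, w, L = 48, 48, 10               # image size, half streamline length

noise = rng.random((h, w))         # input white-noise texture

# Circular 2D vector field (rotation about the image center), normalized.
yy, xx = [a.astype(float) for a in np.mgrid[0:h, 0:w]]
vx, vy = -(yy - h / 2), (xx - w / 2)
mag = np.hypot(vx, vy) + 1e-9
vx, vy = vx / mag, vy / mag

# LIC: for each pixel, average the noise texture along a short streamline
# traced forward and backward with unit Euler steps.
lic = np.zeros((h, w))
for sign in (+1.0, -1.0):
    px, py = xx.copy(), yy.copy()
    for _ in range(L):
        ix = np.clip(px.round().astype(int), 0, w - 1)
        iy = np.clip(py.round().astype(int), 0, h - 1)
        lic += noise[iy, ix]
        px += sign * vx[iy, ix]
        py += sign * vy[iy, ix]
lic /= 2 * L
```

    Averaging along streamlines correlates the texture in the flow direction while leaving it uncorrelated across it, which is what makes the direction field visible; the 3D variant does the same with a volumetric noise texture.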

  5. Point Cloud Visualization in AN Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    During the last few years the use of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users have become familiar with 3D environments. On the other hand, computers with 3D acceleration are now common, broadband access is widespread, and the public information that can be used in GIS clients able to consume data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and combining libraries that are already developed with our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since such a 3D GIS can be very interesting for tasks like rendering and analysing LiDAR or laser-scanner point clouds, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
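
    A tile server like the one described commonly answers a level-of-detail request by returning a bounded subset of the points falling in a tile. The sketch below (invented API, square grid tiles rather than a full octree) illustrates one simple scheme in which each finer level is a superset of the coarser one:

```python
import numpy as np

rng = np.random.default_rng(4)
points = rng.random((100000, 3)) * 1000.0     # synthetic cloud, metres

def tile_for_lod(points, tile_origin, tile_size, lod, budget=1024):
    """Return at most budget * 2**lod points inside one square tile.

    Because the cloud is stored in random order, taking a prefix is an
    unbiased subsample, and each level of detail is a prefix of the
    next finer one, so a client can refine without re-downloading.
    """
    inside = np.all((points[:, :2] >= tile_origin) &
                    (points[:, :2] < tile_origin + tile_size), axis=1)
    tile_pts = points[inside]
    n = min(len(tile_pts), budget * 2**lod)
    return tile_pts[:n]

coarse = tile_for_lod(points, np.array([0.0, 0.0]), 250.0, lod=0)
fine = tile_for_lod(points, np.array([0.0, 0.0]), 250.0, lod=2)
```

    Production servers typically pre-shuffle and pre-index the cloud (e.g. in an octree) so that each tile/LOD request is a sequential read rather than a full scan as here.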

  6. Identifying novel sequence variants of RNA 3D motifs

    PubMed Central

    Zirbel, Craig L.; Roll, James; Sweeney, Blake A.; Petrov, Anton I.; Pirrung, Meg; Leontis, Neocles B.

    2015-01-01

    Predicting RNA 3D structure from sequence is a major challenge in biophysics. An important sub-goal is accurately identifying recurrent 3D motifs from RNA internal and hairpin loop sequences extracted from secondary structure (2D) diagrams. We have developed and validated new probabilistic models for 3D motif sequences based on hybrid Stochastic Context-Free Grammars and Markov Random Fields (SCFG/MRF). The SCFG/MRF models are constructed using atomic-resolution RNA 3D structures. To parameterize each model, we use all instances of each motif found in the RNA 3D Motif Atlas and annotations of pairwise nucleotide interactions generated by the FR3D software. Isostericity relations between non-Watson–Crick basepairs are used in scoring sequence variants. SCFG techniques model nested pairs and insertions, while MRF ideas handle crossing interactions and base triples. We use test sets of randomly-generated sequences to set acceptance and rejection thresholds for each motif group and thus control the false positive rate. Validation was carried out by comparing results for four motif groups to RMDetect. The software developed for sequence scoring (JAR3D) is structured to automatically incorporate new motifs as they accumulate in the RNA 3D Motif Atlas when new structures are solved and is available free for download. PMID:26130723
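
    The thresholding step, scoring randomly generated sequences to fix an acceptance cutoff that controls the false-positive rate, can be sketched as follows; the null-score distribution and candidate score are placeholders, not JAR3D's actual SCFG/MRF scores:

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder null-model scores: in JAR3D these would come from scoring
# randomly generated loop sequences against one motif group's SCFG/MRF
# model; the normal distribution here is purely illustrative.
random_scores = rng.normal(loc=-5.0, scale=2.0, size=5000)

# Acceptance cutoff at the 99th percentile of random-sequence scores,
# so only ~1% of non-motif sequences pass: a 1% false-positive rate.
cutoff = np.percentile(random_scores, 99.0)
fpr = (random_scores > cutoff).mean()

# A candidate sequence is accepted for this motif group only if its
# model score exceeds the cutoff (candidate score is hypothetical).
candidate_score = 1.5
accepted = candidate_score > cutoff
```

    Repeating this per motif group gives each group its own cutoff, so the false-positive rate is controlled uniformly even though the groups' score scales differ.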

  7. Identifying novel sequence variants of RNA 3D motifs.

    PubMed

    Zirbel, Craig L; Roll, James; Sweeney, Blake A; Petrov, Anton I; Pirrung, Meg; Leontis, Neocles B

    2015-09-01

    Predicting RNA 3D structure from sequence is a major challenge in biophysics. An important sub-goal is accurately identifying recurrent 3D motifs from RNA internal and hairpin loop sequences extracted from secondary structure (2D) diagrams. We have developed and validated new probabilistic models for 3D motif sequences based on hybrid Stochastic Context-Free Grammars and Markov Random Fields (SCFG/MRF). The SCFG/MRF models are constructed using atomic-resolution RNA 3D structures. To parameterize each model, we use all instances of each motif found in the RNA 3D Motif Atlas and annotations of pairwise nucleotide interactions generated by the FR3D software. Isostericity relations between non-Watson-Crick basepairs are used in scoring sequence variants. SCFG techniques model nested pairs and insertions, while MRF ideas handle crossing interactions and base triples. We use test sets of randomly-generated sequences to set acceptance and rejection thresholds for each motif group and thus control the false positive rate. Validation was carried out by comparing results for four motif groups to RMDetect. The software developed for sequence scoring (JAR3D) is structured to automatically incorporate new motifs as they accumulate in the RNA 3D Motif Atlas when new structures are solved and is available free for download. PMID:26130723

  8. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  9. New Uses for Rendered Protein

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The history of rendering shows that current uses for rendered protein are a result of value-adding research. Feed nutritionists discovered how to use this formerly very low value by-product as an important component of formulated livestock feed. Today the feed market is mature and has been severel...

  10. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.
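    The abstract does not spell out the algorithms, but a common building block of massively parallel polygon rendering is sort-last compositing: each processor rasterizes its share of the polygons into a full-size framebuffer, and the partial framebuffers are then merged by depth. A minimal sketch of that merge step:

```python
from functools import reduce

def composite(fb_a, fb_b):
    """Depth-composite two partial framebuffers (sort-last style): per pixel,
    the nearer fragment (smaller depth) wins, as in a z-buffer merge.
    Each framebuffer is a list of (depth, color) pairs, one per pixel."""
    return [a if a[0] <= b[0] else b for a, b in zip(fb_a, fb_b)]

# Three "processors", each having rendered its subset of the polygons into a
# full-size framebuffer (infinite depth = background).
INF = float("inf")
fb0 = [(INF, "bg"), (2.0, "red"), (INF, "bg")]
fb1 = [(1.0, "blue"), (3.0, "green"), (INF, "bg")]
fb2 = [(5.0, "gray"), (INF, "bg"), (0.5, "white")]

final = reduce(composite, [fb0, fb1, fb2])
print([color for _, color in final])  # ['blue', 'red', 'white']
```

On a real MPP the pairwise merges are arranged as a reduction tree across processors rather than a sequential fold.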

  11. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional, a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing the access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike earlier proposals for publishing multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.
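    As a rough illustration of the X3D pathway's starting point, a data set can be serialized to a minimal X3D scene with nothing but standard-library string handling. This is a bare-bones sketch (real exporters emit far richer scenes with viewpoints, materials, and metadata):

```python
def to_x3d_pointset(points, colors=None):
    """Serialize a list of (x, y, z) points, optionally with per-point (r, g, b)
    colors in [0, 1], to a minimal X3D scene string."""
    coord = " ".join(f"{x:g} {y:g} {z:g}" for x, y, z in points)
    color_node = ""
    if colors:
        rgb = " ".join(f"{r:g} {g:g} {b:g}" for r, g, b in colors)
        color_node = f'<Color color="{rgb}"/>'
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<X3D profile="Interchange" version="3.3">\n'
        "  <Scene>\n"
        "    <Shape>\n"
        f'      <PointSet>{color_node}<Coordinate point="{coord}"/></PointSet>\n'
        "    </Shape>\n"
        "  </Scene>\n"
        "</X3D>\n"
    )

doc = to_x3d_pointset([(0, 0, 0), (1, 2, 3)], colors=[(1, 0, 0), (0, 0, 1)])
print('<Coordinate point="0 0 0 1 2 3"/>' in doc)  # True
```

The resulting file can be dropped into an X3DOM-enabled HTML page for interactive inspection, which is precisely the interactive HTML branch the article describes.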

  12. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.
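    The paper's exact objective metrics are not given in the abstract, but a standard objective measure of geometric degradation for point clouds is a symmetric point-to-point RMS error between the original and decoded clouds, which can be sketched as:

```python
import math

def rms_nn_distance(cloud_a, cloud_b):
    """Root-mean-square distance from each point in cloud_a to its nearest
    neighbor in cloud_b (O(n*m) brute force; fine for a sketch)."""
    total = 0.0
    for p in cloud_a:
        total += min(sum((pa - pb) ** 2 for pa, pb in zip(p, q)) for q in cloud_b)
    return math.sqrt(total / len(cloud_a))

def symmetric_rms(cloud_a, cloud_b):
    # Symmetrize, since compression may both displace and drop points.
    return max(rms_nn_distance(cloud_a, cloud_b), rms_nn_distance(cloud_b, cloud_a))

original = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
decoded  = [(0, 0, 0.1), (1, 0, 0), (0, 1.1, 0)]
print(round(symmetric_rms(original, decoded), 4))  # 0.0816
```

Real evaluations replace the brute-force nearest-neighbor search with a k-d tree and complement such geometry scores with the subjective ratings the paper collects.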

  13. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  14. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  15. I-125 ROPES eye plaque dosimetry: Validation of a commercial 3D ophthalmic brachytherapy treatment planning system and independent dose calculation software with GafChromic{sup ®} EBT3 films

    SciTech Connect

    Poder, Joel; Corde, Stéphanie

    2013-12-15

    Purpose: The purpose of this study was to measure the dose distributions for different Radiation Oncology Physics and Engineering Services, Australia (ROPES) type eye plaques loaded with I-125 (model 6711) seeds using GafChromic{sup ®} EBT3 films, in order to verify the dose distributions in the Plaque Simulator™ (PS) ophthalmic 3D treatment planning system. The brachytherapy module of RADCALC{sup ®} was used to independently check the dose distributions calculated by PS. Correction factors were derived from the measured data to be used in PS to account for the effect of the stainless steel ROPES plaque backing on the 3D dose distribution.Methods: Using GafChromic{sup ®} EBT3 films inserted in a specially designed Solid Water™ eye ball phantom, dose distributions were measured three-dimensionally both along and perpendicular to I-125 (model 6711) loaded ROPES eye plaque's central axis (CAX) with 2 mm depth increments. Each measurement was performed in full scatter conditions both with and without the stainless steel plaque backing attached to the eye plaque, to assess its effect on the dose distributions. Results were compared to the dose distributions calculated by Plaque Simulator™ and checked independently with RADCALC{sup ®}.Results: The EBT3 film measurements without the stainless steel backing were found to agree with PS and RADCALC{sup ®} to within 2% and 4%, respectively, on the plaque CAX. Also, RADCALC{sup ®} was found to agree with PS to within 2%. The CAX depth doses measured using EBT3 film with the stainless steel backing were observed to result in a 4% decrease relative to when the backing was not present. Within experimental uncertainty, the 4% decrease was found to be constant with depth and independent of plaque size. Using a constant dose correction factor of T= 0.96 in PS, where the calculated dose for the full water scattering medium is reduced by 4% in every voxel in the dose grid, the effect of the plaque backing was accurately
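    The correction described above amounts to scaling every voxel of the full-water-scatter dose grid by the constant factor T = 0.96, matching the measured 4% CAX decrease. A minimal sketch (the grid layout is illustrative):

```python
def apply_backing_correction(dose_grid, T=0.96):
    """Scale every voxel of the full-water-scatter dose grid by the constant
    correction factor T (0.96 here, per the measured 4% decrease caused by the
    stainless steel backing). dose_grid is a nested [z][y][x] list of doses."""
    return [[[T * d for d in row] for row in plane] for plane in dose_grid]

grid = [[[10.0, 8.0], [6.0, 4.0]]]   # a tiny illustrative 1x2x2 dose grid
corrected = apply_backing_correction(grid)
print(round(corrected[0][0][0], 2))  # 9.6
```

Because the measured decrease was constant with depth and independent of plaque size, a single multiplicative factor over the whole grid suffices.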

  16. Panoramic-image-based rendering solutions for visualizing remote locations via the web

    NASA Astrophysics Data System (ADS)

    Obeysekare, Upul R.; Egts, David; Bethmann, John

    2000-05-01

    With advances in panoramic image-based rendering techniques and the rapid expansion of web advertising, new techniques are emerging for visualizing remote locations on the WWW. Success of these techniques depends on how easy and inexpensive it is to develop a new type of web content that provides pseudo 3D visualization at home, 24-hours a day. Furthermore, the acceptance of this new visualization medium depends on the effectiveness of the familiarization tools by a segment of the population that was never exposed to this type of visualization. This paper addresses various hardware and software solutions available to collect, produce, and view panoramic content. While cost and effectiveness of building the content is being addressed using a few commercial hardware solutions, effectiveness of familiarization tools is evaluated using a few sample data sets.
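    At the core of such pseudo-3D panorama viewers is a lookup that maps a viewing direction into panorama pixel coordinates. A minimal sketch for a cylindrical panorama (the projection details and parameter names are illustrative assumptions, not taken from the paper):

```python
import math

def cylindrical_lookup(yaw, pitch, pano_w, pano_h, vfov=math.pi / 2):
    """Map a viewing direction (yaw in [-pi, pi), pitch in radians) to (u, v)
    pixel coordinates in a cylindrical panorama of size pano_w x pano_h,
    assuming a vertical field of view vfov centered on the horizon."""
    u = (yaw + math.pi) / (2 * math.pi) * (pano_w - 1)
    # Height on the unit cylinder is tan(pitch); normalize by the half-height.
    v = (0.5 - math.tan(pitch) / (2 * math.tan(vfov / 2))) * (pano_h - 1)
    return u, v

u, v = cylindrical_lookup(0.0, 0.0, pano_w=4096, pano_h=1024)
print(round(u), round(v))  # 2048 512 (looking straight ahead -> image center)
```

A viewer repeats this lookup per screen pixel (with interpolation) as the user drags the view, which is what produces the "look around" effect without any 3D geometry.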

  17. DNA Assembly in 3D Printed Fluidics.

    PubMed

    Patrick, William G; Nielsen, Alec A K; Keating, Steven J; Levy, Taylor J; Wang, Che-Wei; Rivera, Jaime J; Mondragón-Palomino, Octavio; Carr, Peter A; Voigt, Christopher A; Oxman, Neri; Kong, David S

    2015-01-01

    The process of connecting genetic parts (DNA assembly) is a foundational technology for synthetic biology. Microfluidics present an attractive solution for minimizing use of costly reagents, enabling multiplexed reactions, and automating protocols by integrating multiple protocol steps. However, microfluidics fabrication and operation can be expensive and requires expertise, limiting access to the technology. With advances in commodity digital fabrication tools, it is now possible to directly print fluidic devices and supporting hardware. 3D printed micro- and millifluidic devices are inexpensive, easy to make and quick to produce. We demonstrate Golden Gate DNA assembly in 3D-printed fluidics with reaction volumes as small as 490 nL, channel widths as fine as 220 microns, and per unit part costs ranging from $0.61 to $5.71. A 3D-printed syringe pump with an accompanying programmable software interface was designed and fabricated to operate the devices. Quick turnaround and inexpensive materials allowed for rapid exploration of device parameters, demonstrating a manufacturing paradigm for designing and fabricating hardware for synthetic biology. PMID:26716448

  19. Server-based approach to web visualization of integrated 3-D medical image data.

    PubMed Central

    Poliakov, A. V.; Albright, E.; Corina, D.; Ojemann, G.; Martin, R. F.; Brinkley, J. F.

    2001-01-01

    Although computer processing power and network bandwidth are rapidly increasing, the average desktop is still not able to rapidly process large datasets such as 3-D medical image volumes. We have therefore developed a server-side approach to this problem, in which a high-performance graphics server accepts commands from web clients to load, process, and render 3-D image volumes and models. The renderings are saved as 2-D snapshots on the server, from which they are downloaded and displayed on the client. User interactions with the graphic interface on the client side are translated into additional commands to manipulate the 3-D scene, after which the server re-renders the scene and sends a new image to the client. Example forms-based and Java-based clients are described for a brain mapping application, but the techniques should be applicable to multiple domains where 3-D medical image visualization is of interest. PMID:11825248
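    The command/snapshot round trip described above can be modeled in a few lines. This is a toy stand-in: the command names and the snapshot string are illustrative, and a real server would rasterize the 3-D scene to a PNG rather than format a string:

```python
class RenderSession:
    """Toy model of the server-side pattern: the client sends scene-manipulation
    commands; only 'render' returns a 2D snapshot (here a stand-in string)."""
    def __init__(self):
        self.volume = None
        self.rotation = (0.0, 0.0, 0.0)

    def handle(self, command, **args):
        if command == "load":
            self.volume = args["name"]
            return {"status": "ok"}
        if command == "rotate":
            rx, ry, rz = self.rotation
            self.rotation = (rx + args.get("x", 0.0),
                             ry + args.get("y", 0.0),
                             rz + args.get("z", 0.0))
            return {"status": "ok"}
        if command == "render":
            # A real server would render the 3D scene here and save a PNG.
            snapshot = f"snapshot({self.volume}, rot={self.rotation})"
            return {"status": "ok", "image": snapshot}
        return {"status": "error", "message": f"unknown command {command!r}"}

s = RenderSession()
s.handle("load", name="brain_mri")
s.handle("rotate", y=90.0)
print(s.handle("render")["image"])  # snapshot(brain_mri, rot=(0.0, 90.0, 0.0))
```

The key property of the design is that all state and heavy computation live on the server; the client only ships small command messages and receives finished 2-D images.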

  20. Research and implementation of visualization techniques for 3D explosion fields

    NASA Astrophysics Data System (ADS)

    Ning, Jianguo; Xu, Xiangzhao; Ma, Tianbao; Yu, Wen

    2015-12-01

    The visualization of scalar data in 3D explosion fields was devised to address the complex physics and huge data volumes arising in numerical simulations of explosion mechanics problems. To enhance the explosion effects and reduce the burden of image analysis, an adjustment coefficient was added to the original Phong illumination model. A variety of accelerated volume rendering algorithms and multithreading techniques were used to realize fast rendering and real-time interactive control of 3D explosion fields. A cutaway view was implemented, so that arbitrary sections of the 3D explosion field can be viewed conveniently. Slices can be extracted along the three axes of the 3D explosion field, and the value at an arbitrary point on a slice can be obtained. Experimental results show that the accelerated volume rendering algorithms generate high-quality images at increased speed while retaining fast interactive control.
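    The abstract does not say where the adjustment coefficient enters the Phong model; one plausible reading is a scalar factor on the computed intensity, as in this sketch (the coefficient placement and lighting constants are illustrative assumptions):

```python
def phong_intensity(n, l, v, ka=0.1, kd=0.6, ks=0.3, shininess=16, k_adj=1.0):
    """Classic Phong shading at a surface point with unit normal n, unit light
    direction l, and unit view direction v, with an extra adjustment
    coefficient k_adj scaling the whole intensity."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    diff = max(dot(n, l), 0.0)
    # Reflect l about n: r = 2 (n . l) n - l
    r = tuple(2 * dot(n, l) * ni - li for ni, li in zip(n, l))
    spec = max(dot(r, v), 0.0) ** shininess
    return k_adj * (ka + kd * diff + ks * spec)

n = (0.0, 0.0, 1.0)          # surface normal
l = v = (0.0, 0.0, 1.0)      # light and viewer head-on
print(round(phong_intensity(n, l, v), 3))             # 1.0
print(round(phong_intensity(n, l, v, k_adj=1.5), 3))  # 1.5
```

In a volume renderer this shading is evaluated per sample along each ray, so boosting k_adj for high-pressure regions would visually emphasize the blast front.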

  1. Virtual VMASC: A 3D Game Environment

    NASA Technical Reports Server (NTRS)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like those to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building to not only meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used such as XNA Game Studio, .NET framework, Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the result of our evaluation and the lessons learned from our effort.

  2. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2004-04-05

    This project consists of two activities. Task A, Simulations and Measurements, combines all the material model development and associated numerical work with the materials-oriented experimental activities. The goal of this effort is to provide an improved understanding of dynamic material properties and to provide accurate numerical representations of those properties for use in analysis codes. Task B, ALE3D Development, involves general development activities in the ALE3D code with the focus of improving simulation capabilities for problems of mutual interest to DoD and DOE. Emphasis is on problems involving multi-phase flow, blast loading of structures and system safety/vulnerability studies.

  3. 3D Computations and Experiments

    SciTech Connect

    Couch, R; Faux, D; Goto, D; Nikkel, D

    2003-05-12

    This project is in its first full year after the combining of two previously funded projects: "3D Code Development" and "Dynamic Material Properties". The motivation behind this move was to emphasize and strengthen the ties between the experimental work and the computational model development in the materials area. The coming year's activities will reflect the merging of the two efforts. The current activity is structured in two tasks. Task A, "Simulations and Measurements", combines all the material model development and associated numerical work with the materials-oriented experimental activities. Task B, "ALE3D Development", is a continuation of the non-materials related activities from the previous project.

  4. 3D Integration for Wireless Multimedia

    NASA Astrophysics Data System (ADS)

    Kimmich, Georg

    The convergence of mobile phone, internet, mapping, gaming and office automation tools with high quality video and still imaging capture capability is becoming a strong market trend for portable devices. High-density video encode and decode, 3D graphics for gaming, increased application-software complexity and ultra-high-bandwidth 4G modem technologies are driving the CPU performance and memory bandwidth requirements close to the PC segment. These portable multimedia devices are battery operated, which requires the deployment of new low-power-optimized silicon process technologies and ultra-low-power design techniques at system, architecture and device level. Mobile devices also need to comply with stringent silicon-area and package-volume constraints. As for all consumer devices, low production cost and fast time-to-volume production is key for success. This chapter shows how 3D architectures can bring a possible breakthrough to meet the conflicting power, performance and area constraints. Multiple 3D die-stacking partitioning strategies are described and analyzed on their potential to improve the overall system power, performance and cost for specific application scenarios. Requirements and maturity of the basic process-technology bricks including through-silicon via (TSV) and die-to-die attachment techniques are reviewed. Finally, we highlight new challenges which will arise with 3D stacking and an outlook on how they may be addressed: Higher power density will require thermal design considerations, new EDA tools will need to be developed to cope with the integration of heterogeneous technologies and to guarantee signal and power integrity across the die stack. The silicon/wafer test strategies have to be adapted to handle high-density IO arrays, ultra-thin wafers and provide built-in self-test of attached memories. New standards and business models have to be developed to allow cost-efficient assembly and testing of devices from different silicon and technology

  5. Dynamic real-time 4D cardiac MDCT image display using GPU-accelerated volume rendering.

    PubMed

    Zhang, Qi; Eagleson, Roy; Peters, Terry M

    2009-09-01

    Intraoperative cardiac monitoring, accurate preoperative diagnosis, and surgical planning are important components of minimally-invasive cardiac therapy. Retrospective, electrocardiographically (ECG) gated, multidetector computed tomographical (MDCT), four-dimensional (3D + time), real-time, cardiac image visualization is an important tool for the surgeon in such procedures, particularly if the dynamic volumetric image can be registered to, and fused with, the actual patient anatomy. The addition of stereoscopic imaging provides a more intuitive environment by adding binocular vision and depth cues to structures within the beating heart. In this paper, we describe the design and implementation of a comprehensive stereoscopic 4D cardiac image visualization and manipulation platform, based on the opacity density radiation model, which exploits the power of modern graphics processing units (GPUs) in the rendering pipeline. In addition, we present a new algorithm to synchronize the phases of the dynamic heart to clinical ECG signals, and to calculate and compensate for latencies in the visualization pipeline. A dynamic multiresolution display is implemented to enable the interactive selection and emphasis of a volume of interest (VOI) within the entire contextual cardiac volume and to enhance performance, and a novel color and opacity adjustment algorithm is designed to increase the uniformity of the rendered multiresolution image of the heart. Our system provides a visualization environment superior to noninteractive software-based implementations, but with a rendering speed that is comparable to traditional, but inferior quality, volume rendering approaches based on texture mapping. This retrospective ECG-gated dynamic cardiac display system can provide real-time feedback regarding the suspected pathology, function, and structural defects, as well as anatomical information such as chamber volume and morphology. PMID:19467840
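    The phase-synchronization idea, choosing the cardiac frame for the moment a rendered image will actually reach the screen, can be sketched as follows. This illustrates the concept only, not the paper's algorithm; the R-peak times and frame count are made up:

```python
def frame_for_time(t_now, r_peaks, n_frames, latency=0.0):
    """Map wall-clock time to the 4D volume frame to display, compensating for
    pipeline latency: pick the cardiac phase the heart will be in when the
    rendered frame actually reaches the screen."""
    t = t_now + latency                    # predict ahead by the latency
    last_r = max(r for r in r_peaks if r <= t)
    rr = r_peaks[-1] - r_peaks[-2]         # most recent R-R interval
    phase = ((t - last_r) / rr) % 1.0      # position in the cardiac cycle, 0..1
    return int(phase * n_frames) % n_frames

r_peaks = [0.0, 1.0, 2.0]                  # simulated ECG R-peaks (60 bpm)
print(frame_for_time(2.0, r_peaks, n_frames=20))                # 0: at an R-peak
print(frame_for_time(2.0, r_peaks, n_frames=20, latency=0.25))  # 5: quarter cycle
```

Without the latency term, every displayed frame would lag the true cardiac phase by the rendering pipeline's delay.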

  6. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data are still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, improving the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or depth information computed from a 2D image, to provide a real 3D experience without requiring special glasses. In this paper we present a new framework for the manipulation and annotation of medical landmarks directly in a three-dimensional volume.

  7. Hybrid 3D reconstruction and image-based rendering techniques for reality modeling

    NASA Astrophysics Data System (ADS)

    Sequeira, Vitor; Wolfart, Erik; Bovisio, Emanuele; Biotti, Ester; Goncalves, Joao G. M.

    2000-12-01

    This paper presents a component approach that combines in a seamless way the strong features of laser range acquisition with the visual quality of purely photographic approaches. The relevant components of the system are: (i) Panoramic images for distant background scenery where parallax is insignificant; (ii) Photogrammetry for background buildings and (iii) High detailed laser based models for the primary environment, structure of exteriors of buildings and interiors of rooms. These techniques have a wide range of applications in visualization, virtual reality, cost effective as-built analysis of architectural and industrial environments, building facilities management, real-estate, E-commerce, remote inspection of hazardous environments, TV production and many others.

  8. USM3D Predictions of Supersonic Nozzle Flow

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Elmiligui, Alaa A.; Campbell, Richard L.; Nayani, Sudheer N.

    2014-01-01

    This study focused on the NASA Tetrahedral Unstructured Software System CFD code (USM3D) and its capability to predict supersonic plume flow. Previous studies, published in 2004 and 2009, compared USM3D's results with historical experimental data. The current study continued that comparison, focusing on the use of volume sourcing to capture the shear layers and internal shock structure of the plume. The study was conducted using two benchmark axisymmetric supersonic jet experimental data sets and showed that, with the use of volume sourcing, USM3D was able to capture and model a jet plume's shear layer and internal shock structure.

  9. S3D: An interactive surface grid generation tool

    NASA Technical Reports Server (NTRS)

    Luh, Raymond Ching-Chung; Pierce, Lawrence E.; Yip, David

    1992-01-01

    S3D, an interactive software tool for surface grid generation, is described. S3D provides the means with which a geometry definition based either on a discretized curve set or a rectangular set can be quickly processed towards the generation of a surface grid for computational fluid dynamics (CFD) applications. This is made possible as a result of implementing commonly encountered surface gridding tasks in an environment with a highly efficient and user friendly graphical interface. Some of the more advanced features of S3D include surface-surface intersections, optimized surface domain decomposition and recomposition, and automated propagation of edge distributions to surrounding grids.
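    The abstract does not detail S3D's gridding algorithms, but a classic building block of structured surface grid generation is transfinite (Coons) interpolation, which fills a grid's interior from its four boundary curves. A 2-D sketch of the idea:

```python
def tfi_grid(bottom, top, left, right):
    """Fill a structured grid from its four boundary curves by transfinite
    (Coons) interpolation. Boundaries are lists of (x, y) points; bottom/top
    run left-to-right, left/right run bottom-to-top, and corners must match."""
    ni, nj = len(bottom), len(left)
    grid = [[None] * ni for _ in range(nj)]
    for j in range(nj):
        v = j / (nj - 1)
        for i in range(ni):
            u = i / (ni - 1)
            pt = []
            for c in range(2):
                # Linear blend of opposite boundaries, minus the doubly
                # counted bilinear corner contribution.
                lin = ((1 - v) * bottom[i][c] + v * top[i][c]
                       + (1 - u) * left[j][c] + u * right[j][c])
                corner = ((1 - u) * (1 - v) * bottom[0][c]
                          + u * (1 - v) * bottom[-1][c]
                          + (1 - u) * v * top[0][c]
                          + u * v * top[-1][c])
                pt.append(lin - corner)
            grid[j][i] = tuple(pt)
    return grid

# Unit-square boundaries, 3x3 grid: the interior point lands at the center.
b = [(0, 0), (0.5, 0), (1, 0)]
t = [(0, 1), (0.5, 1), (1, 1)]
l = [(0, 0), (0, 0.5), (0, 1)]
r = [(1, 0), (1, 0.5), (1, 1)]
print(tfi_grid(b, t, l, r)[1][1])  # (0.5, 0.5)
```

Redistributing points along an edge and re-running the interpolation is one way an interactive tool can propagate an edge distribution into the surrounding grid.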

  10. Making Inexpensive 3-D Models

    ERIC Educational Resources Information Center

    Manos, Harry

    2016-01-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the "TPT" theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity…

  11. 3D Printing: Exploring Capabilities

    ERIC Educational Resources Information Center

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  12. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research, and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view as well as approaches to using the ensemble in order to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium-Range Weather Forecasts and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software (illustrated with short video examples) and give information on its availability.

  13. 3D Microperiodic Hydrogel Scaffolds for Robust Neuronal Cultures

    PubMed Central

    Hanson Shepherd, Jennifer N.; Parker, Sara T.; Shepherd, Robert F.; Gillette, Martha U.; Lewis, Jennifer A.; Nuzzo, Ralph G.

    2011-01-01

    Three-dimensional (3D) microperiodic scaffolds of poly(2-hydroxyethyl methacrylate) (pHEMA) have been fabricated by direct-write assembly of a photopolymerizable hydrogel ink. The ink is initially composed of physically entangled pHEMA chains dissolved in a solution of HEMA monomer, comonomer, photoinitiator and water. Upon printing 3D scaffolds of varying architecture, the ink filaments are exposed to UV light, where they are transformed into an interpenetrating hydrogel network of chemically cross-linked and physically entangled pHEMA chains. These 3D microperiodic scaffolds are rendered growth compliant for primary rat hippocampal neurons by absorption of polylysine. Neuronal cells thrive on these scaffolds, forming differentiated, intricately branched networks. Confocal laser scanning microscopy reveals that both cell distribution and extent of neuronal process alignment depend upon scaffold architecture. This work provides an important step forward in the creation of suitable platforms for in vitro study of sensitive cell types. PMID:21709750

  14. Multiple footprint stereo algorithms for 3D display content generation

    NASA Astrophysics Data System (ADS)

    Boughorbel, Faysal

    2007-02-01

    This research focuses on the conversion of stereoscopic video material into an image + depth format which is suitable for rendering on the multiview auto-stereoscopic displays of Philips. The movie industry's recent interest in 3D has significantly increased the availability of stereo material. In this context the conversion from stereo to the input formats of 3D displays becomes an important task. In this paper we present a stereo algorithm that uses multiple footprints, generating several depth candidates for each image pixel. We characterize the various matching windows and we devise a robust strategy for extracting high-quality estimates from the resulting depth candidates. The proposed algorithm is based on a surface filtering method that simultaneously employs the available depth estimates in a small local neighborhood, while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting high-quality image-aligned depth maps proved an excellent match with our 3D displays.
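    The multiple-footprint idea, one depth candidate per matching-window size followed by a robust selection, can be shown in miniature on a 1-D scanline. The paper's surface-filtering selection strategy is far more elaborate; a plain median is used here for illustration:

```python
def disparity_candidates(left, right, x, window_sizes, max_disp):
    """For pixel x on a 1-D scanline, compute one best disparity per matching
    footprint (window half-size) by minimizing the SAD matching cost."""
    cands = []
    for w in window_sizes:
        def cost(d):
            return sum(abs(left[x + k] - right[x + k - d])
                       for k in range(-w, w + 1))
        cands.append(min(range(max_disp + 1), key=cost))
    return cands

def robust_disparity(cands):
    return sorted(cands)[len(cands) // 2]   # median across footprints

# Synthetic scanline pair: right is left shifted by 2 pixels.
left = [0, 0, 10, 30, 60, 90, 60, 30, 10, 0, 0, 0]
right = left[2:] + [0, 0]
cands = disparity_candidates(left, right, x=5, window_sizes=[1, 2], max_disp=3)
print(robust_disparity(cands))  # 2
```

Small footprints preserve depth discontinuities while large ones stabilize textureless regions, which is why combining several candidates beats any single window size.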

  15. 3D web visualization of huge CityGML models

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Devigili, F.; Soave, M.; Di Staso, U.; De Amicis, R.

    2015-08-01

    Nowadays, rapid technological development in the acquisition of geo-spatial information, combined with the capability to process these data in a relatively short period of time, allows the generation of detailed 3D textured city models that will become an essential part of the modern city information infrastructure (Spatial Data Infrastructure) and can be used to integrate various data from different sources for publicly accessible visualisation and many other applications. One of the main bottlenecks, which at the moment limits the use of these datasets to a few experts, is the lack of efficient web-based visualization systems and of interoperable frameworks that standardise access to the city models. The work presented in this paper tries to satisfy these two requirements by developing a 3D web-based visualization system based on OGC standards and effective visualization concepts. The architectural framework, based on Service Oriented Architecture (SOA) concepts, provides the 3D city data to a web client designed to support the view process in a very effective way. The first part of the work concerns the design of a framework compliant with the 3D Portrayal Service drafted by the Open Geospatial Consortium (OGC) 3D standardization working group. The second part is related to the development of an effective web client able to render the 3D city models efficiently.

  16. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O. Serdar; Alatan, A. Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on depth sensors alone are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using the intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795
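The fusion step can be illustrated with a deliberately simplified scalar Kalman filter that folds a vision measurement and a depth measurement into one state estimate each frame. The paper's filter is an extended Kalman filter over a full 6-DoF pose; all numbers below are invented for the sketch:

```python
# Minimal 1-D Kalman filter fusing two sensors per frame, a toy stand-in for
# the paper's EKF fusion of vision and depth measurements.

def kf_update(x, p, z, r):
    """Scalar Kalman update: state x, variance p, measurement z, noise var r."""
    k = p / (p + r)                     # Kalman gain
    return x + k * (z - x), (1 - k) * p

def fuse(frames, q=0.01, r_vision=0.5, r_depth=0.1):
    x, p = 0.0, 1.0                     # initial state estimate and variance
    for z_vis, z_dep in frames:
        p += q                                   # predict: random-walk noise
        x, p = kf_update(x, p, z_vis, r_vision)  # update with vision measurement
        x, p = kf_update(x, p, z_dep, r_depth)   # update with depth measurement
    return x, p

# True position 2.0; vision is noisier than depth (fixed offsets stand in for noise).
frames = [(2.3, 1.95), (1.8, 2.04), (2.1, 1.97), (1.7, 2.05)]
x, p = fuse(frames)
print(round(x, 2), round(p, 3))  # estimate near 2.0, dominated by the depth sensor
```

The measurement-noise variances play the role the abstract assigns to the two sensor modalities: the filter automatically weights the lower-noise depth channel more heavily.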

  17. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  18. Clip art rendering of smooth isosurfaces.

    PubMed

    Stroila, Matei; Eisemann, Elmar; Hart, John

    2008-01-01

    Clip art is a simplified illustration form consisting of layered filled polygons or closed curves used to convey 3D shape information in a 2D vector graphics format. This paper focuses on the problem of direct conversion of smooth surfaces, ranging from the free-form shapes of art and design to the mathematical structures of geometry and topology, into a clip art form suitable for illustration use in books, papers and presentations. We show how to represent silhouette, shadow, gleam and other surface feature curves as the intersection of implicit surfaces, and derive equations for their efficient interrogation via particle chains. We further describe how to sort, orient, identify and fill the closed regions that overlay to form clip art. We demonstrate the results with numerous renderings used to illustrate the paper itself. PMID:17993708

  19. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    NASA Astrophysics Data System (ADS)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made out of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements, such as paintings, computer-generated objects and scanned objects, are added. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made out of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo modes, and has been optimized to allow high quality rendering.

  20. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutants spreading, etc.). The state-of-the-art review performed did not identify any publicly available large scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations that enhance the end-users' interactions.

  1. Identifying Key Structural Features and Spatial Relationships in Archean Microbialites Using 2D and 3D Visualization Methods

    NASA Astrophysics Data System (ADS)

    Stevens, E. W.; Sumner, D. Y.

    2009-12-01

    Microbialites in the 2521 ± 3 Ma Gamohaan Formation, South Africa, have several different end-member morphologies which show distinct growth structures and spatial relationships. We characterized several growth structures and spatial relationships in two samples (DK20 and 2_06) using a combination of 2D and 3D analytical techniques. There are two main goals in studying complicated microbialites with a combination of 2D and 3D methods. First, one can better understand microbialite growth by identifying important structures and structural relationships. Once structures are identified, the order in which the structures formed and how they are related can be inferred from observations of crosscutting relationships. Second, it is important to use both 2D and 3D methods to correlate 3D observations with those in 2D that are more common in the field. Combining analyses provides significantly more insight into the 3D morphology of microbial structures. In our studies, 2D analysis consisted of describing polished slabs and serial sections created by grinding down the rock 100 microns at a time. 3D analysis was performed on serial sections visualized in 3D using Vrui and 3DVisualizer software developed at KeckCAVES, UCD (http://keckcaves.org). Data were visualized on a laptop and in an immersive cave system. Both samples contain microbial laminae and more vertically oriented microbial "walls" called supports. The relationships between these features created voids now filled with herringbone and blocky calcite crystals. DK20, a classic plumose structure, contains two types of support structures. Both are 1st order structures (one type with organic inclusions and one without) interpreted as planar features based on 2D analysis. 
In the 2D analysis the 1st order structures show v branching relationships as well as single cuspate relationships (two 1st order structures with inclusions merging upward), and single tented relationships (three supports

  2. Recognition technology research based on 3D fingerprint

    NASA Astrophysics Data System (ADS)

    Tian, Qianxiao; Huang, Shujun; Zhang, Zonghua

    2014-11-01

    Fingerprint has been widely studied and applied to personal recognition in both forensic and civilian applications. However, current widespread fingerprint recognition is based on 2D (two-dimensional) fingerprint images, and the mapping from 3D (three-dimensional) to 2D loses one dimension of information, which leads to low accuracy and even wrong recognition. This paper presents a 3D fingerprint recognition method based on the fringe projection technique. A series of fringe patterns generated by software are projected onto a finger surface through a projecting system. From another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. The deformed fringe pattern images give the 3D shape data of the finger and the 3D fingerprint features. By converting the 3D fingerprints to 2D space, traditional 2D fingerprint recognition methods can be applied to 3D fingerprint recognition. Experimental results on measuring and recognizing some 3D fingerprints show the accuracy and availability of the developed 3D fingerprint system.

  3. Optoplasmonics: hybridization in 3D

    NASA Astrophysics Data System (ADS)

    Rosa, L.; Gervinskas, G.; Žukauskas, A.; Malinauskas, M.; Brasselet, E.; Juodkazis, S.

    2013-12-01

    Femtosecond laser fabrication has been used to make hybrid refractive and diffractive micro-optical elements in the photo-polymer SZ2080. For applications in microfluidics, axicon lenses were fabricated (both single and arrays) for generation of light intensity patterns extending through the entire depth of a typically tens-of-micrometers deep channel. Further hybridisation of an axicon with a plasmonic slot is fabricated and demonstrated numerically. Spiralling chiral grooves were inscribed into a 100-nm-thick gold coating sputtered over polymerized micro-axicon lenses, using a focused ion beam. This demonstrates the possibility of hybridisation between optical and plasmonic 3D micro-optical elements. Numerical modelling of optical performance by the 3D-FDTD method is presented.

  4. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.; Frank, J.; Koide, S.; Sakai, J.-I.; Christodoulou, D. M.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium with Lorentz factors of W = 4.56 and evolving in a four-dimensional spacetime. The new results are understood as follows: Relativistic simulations have consistently shown that these jets are effectively heavy and so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure.

  5. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  6. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  7. 360-degree 3D profilometry

    NASA Astrophysics Data System (ADS)

    Song, Yuanhe; Zhao, Hong; Chen, Wenyi; Tan, Yushan

    1997-12-01

    A new method of 360 degree turning 3D shape measurement, in which light sectioning and phase shifting techniques are both used, is presented in this paper. A sinusoidal light field is applied to the projected light stripe, while a phase shifting technique is used to calculate the phases of the light slit. A wrapped phase distribution of the slit is thereby formed, and the unwrapping is performed using the height information from the light sectioning method. Phase measuring results with better precision can therefore be obtained. Finally, the target 3D shape data can be produced from the geometric relationships between phases and object heights. The principles of this method are discussed in detail and experimental results are shown in this paper.
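The phase-shifting step can be sketched with the standard four-step formula, in which four intensity samples shifted by 90 degrees recover the wrapped phase. This is a textbook illustration, not necessarily the paper's exact processing chain:

```python
import math

# Four-step phase shifting: I_n = A + B*cos(phi + n*pi/2), n = 0..3.
# The wrapped phase follows from phi = atan2(I3 - I1, I0 - I2), since
# I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi).

def wrapped_phase(i0, i1, i2, i3):
    return math.atan2(i3 - i1, i0 - i2)

def intensities(phi, a=128.0, b=100.0):
    """Simulate the four phase-shifted intensity samples for a known phase."""
    return [a + b * math.cos(phi + n * math.pi / 2) for n in range(4)]

# Recover a known phase from its four shifted intensity samples.
true_phi = 1.2
i0, i1, i2, i3 = intensities(true_phi)
print(round(wrapped_phase(i0, i1, i2, i3), 6))  # -> 1.2
```

Phases outside (-pi, pi] come back wrapped, which is exactly why the paper needs the light-sectioning height data to unwrap them.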

  8. 3D Printable Graphene Composite.

    PubMed

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human being's history, both the Iron Age and Silicon Age thrived after a matured massive processing technology was developed. Graphene is the most recent superior material which could potentially initialize another new material Age. However, while being exploited to its full extent, conventional processing methods fail to provide a link to today's personalization tide. New technology should be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional stream to push the graphene revolution into a new phase. Here we demonstrate for the first time, a graphene composite, with a graphene loading up to 5.6 wt%, can be 3D printable into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C(-1) from room temperature to its glass transition temperature (Tg), which is crucial to build minute thermal stress during the printing process. PMID:26153673

  9. 3D Printed Robotic Hand

    NASA Technical Reports Server (NTRS)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drives up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Objet Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings were installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand (excluding the actuators) was $167, significantly lower than that of other robotic hands, which require more complex assembly processes.

  10. 3D light scanning macrography.

    PubMed

    Huber, D; Keller, M; Robert, D

    2001-08-01

    The technique of 3D light scanning macrography permits the non-invasive surface scanning of small specimens at magnifications up to 200x. Obviating both the problem of limited depth of field inherent to conventional close-up macrophotography and the metallic coating required by scanning electron microscopy, 3D light scanning macrography provides three-dimensional digital images of intact specimens without the loss of colour, texture and transparency information. This newly developed technique offers a versatile, portable and cost-efficient method for the non-invasive digital and photographic documentation of small objects. Computer controlled device operation and digital image acquisition facilitate fast and accurate quantitative morphometric investigations, and the technique offers a broad field of research and educational applications in biological, medical and materials sciences. PMID:11489078

  11. 3D-graphite structure

    SciTech Connect

    Belenkov, E. A. Ali-Pasha, V. A.

    2011-01-15

    The structure of clusters of some new carbon 3D-graphite phases has been calculated using molecular-mechanics methods. It is established that the 3D-graphite polytypes α1,1, α1,3, α1,5, α2,1, α2,3, α3,1, β1,2, β1,4, β1,6, β2,1, and β3,2 consist of sp²-hybridized atoms, have hexagonal unit cells, and differ in the structure of the layers and the order of their alternation. A possible way to experimentally synthesize the new carbon phases is proposed: the polymerization and carbonization of hydrocarbon molecules.

  12. Restructuring of RELAP5-3D

    SciTech Connect

    George Mesina; Joshua Hykes

    2005-09-01

    The RELAP5-3D source code is unstructured with many interwoven logic flow paths. By restructuring the code, it becomes easier to read and understand, which reduces the time and money required for code development, debugging, and maintenance. A structured program is comprised of blocks of code with one entry and exit point and downward logic flow. IF tests and DO loops inherently create structured code, while GOTO statements are the main cause of unstructured code. FOR_STRUCT is a commercial software package that converts unstructured FORTRAN into structured programming; it was used to restructure individual subroutines. Primarily it transforms GOTO statements, ARITHMETIC IF statements, and COMPUTED GOTO statements into IF-ELSEIF-ELSE tests and DO loops. The complexity of RELAP5-3D complicated the task. First, FOR_STRUCT cannot completely restructure all the complex coding contained in RELAP5-3D. An iterative approach of multiple FOR_STRUCT applications gave some additional improvements. Second, FOR_STRUCT cannot restructure FORTRAN 90 coding, and RELAP5-3D is partially written in FORTRAN 90. Unix scripts for pre-processing subroutines into coding that FOR_STRUCT could handle and post-processing it back into FORTRAN 90 were written. Finally, FOR_STRUCT does not have the ability to restructure the RELAP5-3D code which contains pre-compiler directives. Variations of a file were processed with different pre-compiler options switched on or off, ensuring that every block of code was restructured. Then the variations were recombined to create a completely restructured source file. Unix scripts were written to perform these tasks, as well as to make some minor formatting improvements. In total, 447 files comprising some 180,000 lines of FORTRAN code were restructured. These showed significant reduction in the number of logic jumps contained as measured by reduction in the number of GOTO statements and line labels. The average number of GOTO statements per subroutine

  13. E3D, the Euro3D visualization tool II: Mosaics, VIMOS data and large IFUs of the future

    NASA Astrophysics Data System (ADS)

    Sánchez, S. F.; Becker, T.; Kelz, A.

    2004-03-01

    In this paper, we describe the capabilities of E3D, the Euro3D visualization tool, to handle and display data created by large Integral Field Units (IFUs) and by mosaics consisting of multiple pointings. The reliability of the software has been tested with real data, originating from the PMAS instrument in mosaic mode and from the VIMOS instrument, which features the largest IFU currently available. The capabilities and limitations of the current software are examined in view of future large IFUs, which will produce extremely large datasets.

  14. Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    PubMed Central

    Kroes, Thomas; Post, Frits H.; Botha, Charl P.

    2012-01-01

    The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, especially the more realistic lighting has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de-facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques also to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made available our framework under a permissive open source license. PMID:22768292
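The stochastic flavor of MCRT can be illustrated with the simplest volumetric case: estimating transmittance through a homogeneous medium by sampling exponential free-flight distances. This is a toy example, not Exposure Render's pipeline:

```python
import math
import random

# Monte Carlo estimate of transmittance T = exp(-sigma * L) through a
# homogeneous volume of extinction sigma and depth L: sample exponential
# free-flight distances and count the fraction of paths that pass through.

def mc_transmittance(sigma, depth, n=100_000, seed=1):
    rng = random.Random(seed)
    passed = sum(1 for _ in range(n)
                 if -math.log(1.0 - rng.random()) / sigma > depth)
    return passed / n

sigma, depth = 2.0, 0.5
print(mc_transmittance(sigma, depth), math.exp(-sigma * depth))  # estimate vs exact
```

The same sampling viewpoint is what lets MCRT-style renderers fold scattering, occlusion and camera effects into one unbiased estimator instead of one special-case technique per effect.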

  15. [Real time 3D echocardiography].

    PubMed

    Bauer, F; Shiota, T; Thomas, J D

    2001-07-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients. PMID:11494630

  16. [Real time 3D echocardiography

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matricial emitter-receiver probe complex with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is restituted in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this system of acquisition allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real time 3D echocardiography is ready for clinical usage, some improvements are still necessary to improve its user-friendliness. Real time 3D echocardiography could then become the essential tool for the understanding, diagnosis and management of patients.

  17. Comparison of 3D Reconstructive Technologies Used for Morphometric Research and the Translation of Knowledge Using a Decision Matrix

    ERIC Educational Resources Information Center

    Martin, Charys M.; Roach, Victoria A.; Nguyen, Ngan; Rice, Charles L.; Wilson, Timothy D.

    2013-01-01

    The use of three-dimensional (3D) models for education, pre-operative assessment, presurgical planning, and measurement have become more prevalent. With the increase in prevalence of 3D models there has also been an increase in 3D reconstructive software programs that are used to create these models. These software programs differ in…

  18. Human efficiency for recognizing 3-D objects in luminance noise.

    PubMed

    Tjan, B S; Braje, W L; Legge, G E; Kersten, D

    1995-11-01

    The purpose of this study was to establish how efficiently humans use visual information to recognize simple 3-D objects. The stimuli were computer-rendered images of four simple 3-D objects--wedge, cone, cylinder, and pyramid--each rendered from 8 randomly chosen viewing positions as shaded objects, line drawings, or silhouettes. The objects were presented in static, 2-D Gaussian luminance noise. The observer's task was to indicate which of the four objects had been presented. We obtained human contrast thresholds for recognition, and compared these to an ideal observer's thresholds to obtain efficiencies. In two auxiliary experiments, we measured efficiencies for object detection and letter recognition. Our results showed that human object-recognition efficiency is low (3-8%) when compared to efficiencies reported for some other visual-information processing tasks. The low efficiency means that human recognition performance is limited primarily by factors intrinsic to the observer rather than the information content of the stimuli. We found three factors that play a large role in accounting for low object-recognition efficiency: stimulus size, spatial uncertainty, and detection efficiency. Four other factors play a smaller role in limiting object-recognition efficiency: observers' internal noise, stimulus rendering condition, stimulus familiarity, and categorization across views. PMID:8533342
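In ideal-observer analysis, efficiency is conventionally the ratio of ideal to human contrast energy at threshold, i.e. the squared ratio of threshold contrasts. A one-line sketch of that standard definition (the sample thresholds below are invented, not taken from the study):

```python
# Ideal-observer efficiency: ratio of contrast energies at threshold,
# i.e. (ideal contrast threshold / human contrast threshold) ** 2.

def efficiency(ideal_threshold, human_threshold):
    return (ideal_threshold / human_threshold) ** 2

# A human needing 4x the ideal observer's threshold contrast is ~6% efficient,
# within the 3-8% range the abstract reports for object recognition.
print(round(efficiency(0.05, 0.20), 4))
```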

  19. Perspective volume rendering on Parallel Algebraic Logic (PAL) computer

    NASA Astrophysics Data System (ADS)

    Li, Hongzheng; Shi, Hongchi

    1998-09-01

    We propose a perspective volume graphics rendering algorithm on SIMD mesh-connected computers and implement the algorithm on the Parallel Algebraic Logic computer. The algorithm is a parallel ray casting algorithm. It decomposes the 3D perspective projection into two transformations that can be implemented in the SIMD fashion to solve the data redistribution problem caused by non-regular data access patterns in the perspective projection.
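Independent of the SIMD decomposition, the core of any ray-casting volume renderer is front-to-back compositing of samples along each ray; a minimal generic sketch (not the PAL implementation, and with invented sample values):

```python
# Front-to-back compositing of (color, opacity) samples along one ray.
# Accumulation stops early once the ray is effectively opaque.

def composite(samples, opacity_cutoff=0.99):
    color, alpha = 0.0, 0.0
    for c, a in samples:                 # samples ordered front to back
        color += (1.0 - alpha) * a * c   # weight by remaining transparency
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_cutoff:      # early ray termination
            break
    return color, alpha

# A ray crossing a translucent shell (opacity 0.3) then a dense core (0.9).
print(composite([(1.0, 0.3), (0.5, 0.9)]))
```

In a parallel ray caster this loop runs per ray; the paper's contribution is reorganizing the perspective projection so that the underlying data accesses become regular enough for a SIMD mesh.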

  20. Volume Visual Attention Maps (VVAM) in ray-casting rendering.

    PubMed

    Beristain, Andoni; Congote, John; Ruiz, Oscar

    2012-01-01

    This paper presents an extension of visual attention maps to volume data visualization, where eye fixation points become rays in 3D space and the visual attention map becomes a volume. This Volume Visual Attention Map (VVAM) is used to interactively enhance a ray-casting based direct volume rendering (DVR) visualization. The practical application of this idea in the biomedical image visualization field is explored for interactive visualization. PMID:22356956

  1. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    NASA Astrophysics Data System (ADS)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and is developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm alone is not sufficient to obtain acceptable results for automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.
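The Z-buffer visibility test mentioned above can be sketched in a few lines: a surface point is only a valid texture source for a given photograph if no nearer point has been recorded at the same pixel. This is a schematic stand-in for the tmapper/ML3DImage pipeline, with invented point data:

```python
# Minimal Z-buffer visibility: points project to pixels; a point is visible
# only if it holds the nearest depth recorded at its pixel (closer points
# occlude farther ones).

def zbuffer_visibility(points):
    """points: list of (pixel, depth). Returns the set of visible point indices."""
    nearest = {}                      # pixel -> (depth, point index)
    for i, (pix, depth) in enumerate(points):
        if pix not in nearest or depth < nearest[pix][0]:
            nearest[pix] = (depth, i)
    return {i for _, i in nearest.values()}

# Two points land on pixel (2, 3); only the nearer one (index 0) is visible.
points = [((2, 3), 1.5), ((2, 3), 4.0), ((5, 1), 2.2)]
print(sorted(zbuffer_visibility(points)))  # -> [0, 2]
```

Texture for an occluded point would then be taken from a different photograph in which that point wins the depth test.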

  2. Visualization of 3D optical lattices

    NASA Astrophysics Data System (ADS)

    Lee, Hoseong; Clemens, James

    2016-05-01

    We describe the visualization of 3D optical lattices based on Sisyphus cooling implemented with open source software. We plot the adiabatic light shift potentials found by diagonalizing the effective Hamiltonian for the light shift operator. Our program incorporates a variety of atomic ground state configurations with total angular momentum ranging from j = 1 / 2 to j = 4 and a variety of laser beam configurations including the two-beam lin ⊥ lin configuration, the four-beam umbrella configuration, and four beams propagating in two orthogonal planes. In addition to visualizing the lattice, the program also evaluates lattice parameters such as the oscillation frequency for atoms trapped deep in the wells. The program is intended to help guide experimental implementations of optical lattices.

  3. Programming standards for effective S-3D game development

    NASA Astrophysics Data System (ADS)

    Schneider, Neil; Matveev, Alexander

    2008-02-01

    When a video game is in development, more often than not it is being rendered in three dimensions - complete with volumetric depth. It's the PC monitor that is taking this three-dimensional information, and artificially displaying it in a flat, two-dimensional format. Stereoscopic drivers take the three-dimensional information captured from DirectX and OpenGL calls and properly display it with a unique left and right sided view for each eye so a proper stereoscopic 3D image can be seen by the gamer. The two-dimensional limitation of how information is displayed on screen has encouraged programming short-cuts and work-arounds that stifle this stereoscopic 3D effect, and the purpose of this guide is to outline techniques to get the best of both worlds. While the programming requirements do not significantly add to the game development time, following these guidelines will greatly enhance your customer's stereoscopic 3D experience, increase your likelihood of earning Meant to be Seen certification, and give you instant cost-free access to the industry's most valued consumer base. While this outline is mostly based on NVIDIA's programming guide and iZ3D resources, it is designed to work with all stereoscopic 3D hardware solutions and is not proprietary in any way.

  4. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    SciTech Connect

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or Satellite Links) using a 3D computer model of the area that is rendered from actual sensor data.

  5. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve the process of creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as for MRI.
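
    For reference, Maximum Intensity Projection itself is conceptually simple: each output pixel takes the maximum voxel value along its viewing ray. A minimal axis-aligned sketch (our own illustration of standard MIP, not the paper's sensor-domain method):

    ```python
    def mip(volume):
        """Maximum Intensity Projection along the z axis of a
        volume[z][y][x] list-of-lists: each output pixel is the maximum
        voxel value along its viewing ray."""
        depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
        return [[max(volume[z][y][x] for z in range(depth))
                 for x in range(width)]
                for y in range(height)]

    volume = [
        [[0, 1], [2, 3]],   # z = 0
        [[5, 0], [1, 7]],   # z = 1
    ]
    assert mip(volume) == [[5, 1], [2, 7]]
    ```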

  6. Is 3D true non-linear traveltime tomography reasonable?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    Data sets requiring 3D analysis tools, whether from seismic exploration (both onshore and offshore experiments) or natural seismicity (microseismicity surveys or post-event measurements), are increasingly numerous. Classical linearized tomographies, as well as earthquake localisation codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis cannot provide an adequate background velocity image. Moreover, the design of acquisition layouts is often intrinsically 3D and makes even 2D approaches difficult, especially in natural seismicity cases. The solution therefore relies on a true 3D non-linear approach, which allows us to explore the model space and identify an optimal velocity image. The problem then becomes a practical one, whose feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level-set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because inhomogeneous inversion parameters are easier to manage in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and source locations.

  7. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
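
    The programming style the abstract describes — a purely computational loop whose object positions are rendered by the Visual module — can be sketched as below. The VPython calls are shown as comments so the sketch runs without the library; the `sphere`/`vector`/`rate` names follow VPython's documented API, but treat this as an assumed illustration rather than course material from the paper.

    ```python
    # Sketch of a VPython-style physics program: pure computation, with the
    # (commented-out) graphics calls a real VPython program would add.

    # from vpython import sphere, vector, rate          # real program imports this
    # ball = sphere(pos=vector(0, 10, 0), radius=0.5)   # rendered 3D object

    pos, vel = 10.0, 0.0   # height (m) and velocity (m/s) of a dropped ball
    g, dt = -9.8, 0.01     # gravity (m/s^2) and time step (s)

    t = 0.0
    while pos > 0.0:
        # rate(100)                    # VPython: limit loop to 100 updates/s
        vel += g * dt                  # semi-implicit Euler integration
        pos += vel * dt
        # ball.pos = vector(0, pos, 0) # VPython: move the rendered sphere
        t += dt

    print(round(t, 2))   # fall time, close to sqrt(2*10/9.8) ≈ 1.43 s
    ```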

  8. VPython: Python plus Animations in Stereo 3D

    NASA Astrophysics Data System (ADS)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.

  9. New impressive capabilities of SE-workbench for EO/IR real-time rendering of animated scenarios including flares

    NASA Astrophysics Data System (ADS)

    Le Goff, Alain; Cathala, Thierry; Latger, Jean

    2015-10-01

    To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft, as viewed by EO/IR threats. For this purpose, it augmented the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature, and is currently integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries that allows preparing and visualizing a 3D scene for the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. The recent evolutions concern mainly the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of plume static signatures and, lastly for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.

  10. A software system for evaluation and training of spatial reasoning and neuroanatomical knowledge in a virtual environment.

    PubMed

    Armstrong, Ryan; de Ribaupierre, Sandrine; Eagleson, Roy

    2014-04-01

    This paper describes the design and development of a software tool for the evaluation and training of surgical residents using an interactive, immersive, virtual environment. Our objective was to develop a tool to evaluate user spatial reasoning skills and knowledge in a neuroanatomical context, as well as to augment their performance through interactivity. In the visualization, manually segmented anatomical surface images of MRI scans of the brain were rendered using a stereo display to improve depth cues. A magnetically tracked wand was used as a 3D input device for localization tasks within the brain. The movement of the wand was made to correspond to movement of a spherical cursor within the rendered scene, providing a reference for localization. Users can be tested on their ability to localize structures within the 3D scene, and their ability to place anatomical features at the appropriate locations within the rendering. PMID:24524753

  11. Assessment of rhinoplasty techniques by overlay of before-and-after 3D images.

    PubMed

    Toriumi, Dean M; Dixon, Tatiana K

    2011-11-01

    This article describes the equipment and software used to create facial 3D imaging and discusses the validation and reliability of the objective assessments done using this equipment. By overlaying preoperative and postoperative 3D images, it is possible to assess the surgical changes in 3D. Methods are described to assess the 3D changes from the rhinoplasty techniques of nasal dorsal augmentation, increasing tip projection, narrowing the nose, and nasal lengthening. PMID:22004862

  12. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  13. 3D geometric modelling of hand-woven textile

    NASA Astrophysics Data System (ADS)

    Shidanshidi, H.; Naghdy, F.; Naghdy, G.; Conroy, D. Wood

    2008-02-01

    Geometric modeling and haptic rendering of textile have attracted significant interest over the last decade. A haptic representation is created by adding the physical properties of an object to its geometric configuration. While research has been conducted into geometric modeling of fabric, current systems require time-consuming manual recognition of textile specifications and data entry. The development of a generic approach for construction of the 3D geometric model of a woven textile is pursued in this work. The geometric model will be superimposed with a haptic model in future work. The focus at this stage is on hand-woven textile artifacts for display in museums. A fuzzy rule based algorithm is applied to still images of the artifacts to generate the 3D model. The derived model is exported as a 3D VRML model of the textile for visual representation and haptic rendering. An overview of the approach is provided and the developed algorithm is described. The approach is validated by applying the algorithm to different textile samples and comparing the produced models with the actual structure and pattern of the samples.

  14. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    SciTech Connect

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-09-15

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views including axial, multiplanar reformation, maximum-intensity projection, and volume rendering and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization adds additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.

  15. Computer-aided detection of colonic polyps using volume rendering

    NASA Astrophysics Data System (ADS)

    Hong, Wei; Qiu, Feng; Marino, Joseph; Kaufman, Arie

    2007-03-01

    This work utilizes a novel pipeline for the computer-aided detection (CAD) of colonic polyps, assisting radiologists in locating polyps when using a virtual colonoscopy system. Our CAD pipeline automatically detects polyps while reducing the number of false positives (FPs). It integrates volume rendering and conformal colon flattening with texture and shape analysis. The colon is first digitally cleansed, segmented, and extracted from the CT dataset of the abdomen. The colon surface is then mapped to a 2D rectangle using conformal mapping. Using this colon flattening method, the CAD problem is converted from 3D into 2D. The flattened image is rendered using a direct volume rendering of the 3D colon dataset with a translucent transfer function. Suspicious polyps are detected by applying a clustering method on the 2D volume rendered image. The FPs are reduced by analyzing shape and texture features of the suspicious areas detected by the clustering step. Compared with shape-based methods, ours is much faster and much more efficient as it avoids computing curvature and other shape parameters for the whole colon wall. We tested our method with 178 datasets and found it to be 100% sensitive to adenomatous polyps with a low rate of FPs. The CAD results are seamlessly integrated into a virtual colonoscopy system, providing the radiologists with visual cues and likelihood indicators of areas likely to contain polyps, and allowing them to quickly inspect the suspicious areas and further exploit the flattened colon view for easy navigation and bookmark placement.
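
    The clustering step on the flattened 2D image can be illustrated with a simple connected-component grouping of above-threshold pixels. This is our own simplification for illustration; the paper's actual clustering method and parameters are not specified here.

    ```python
    # Hypothetical sketch: group above-threshold pixels of a 2D rendered
    # image into 4-connected components; each component is one suspicious
    # region to pass on to shape/texture analysis.
    from collections import deque

    def cluster_candidates(image, threshold):
        """Return a list of connected components of pixels >= threshold."""
        h, w = len(image), len(image[0])
        seen = [[False] * w for _ in range(h)]
        clusters = []
        for y in range(h):
            for x in range(w):
                if image[y][x] >= threshold and not seen[y][x]:
                    comp, queue = [], deque([(y, x)])
                    seen[y][x] = True
                    while queue:  # breadth-first flood fill
                        cy, cx = queue.popleft()
                        comp.append((cy, cx))
                        for ny, nx in ((cy-1, cx), (cy+1, cx),
                                       (cy, cx-1), (cy, cx+1)):
                            if (0 <= ny < h and 0 <= nx < w
                                    and image[ny][nx] >= threshold
                                    and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                    clusters.append(comp)
        return clusters

    image = [
        [0, 9, 9, 0],
        [0, 9, 0, 0],
        [0, 0, 0, 8],
    ]
    assert len(cluster_candidates(image, threshold=5)) == 2
    ```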

  16. Efficient space-leaping method for volume rendering

    NASA Astrophysics Data System (ADS)

    Cho, Sungup; Kim, Hyeongdo; Kim, Myeongsun; Jeong, Changsung

    1999-03-01

    Volume rendering is a technique that visualizes a 2D image of an object from 3D volume data on the image screen. The ray casting algorithm, one of the most popular volume rendering techniques, generates detailed, high-quality images compared with other volume rendering algorithms, but since it is highly time-consuming given a large number of voxels, many acceleration techniques have been developed. Here we introduce a new acceleration technique, an efficient space-leaping method. Our method traverses the volume data and projects the 3D location of each voxel onto the image screen, to find the pixels that have a non-zero value in the final volume image and the locations of the non-empty voxels closest to each ray. During this process, adaptive run-length encoding and a line-drawing algorithm are used to traverse the volume data and find the non-zero pixels efficiently. We then cast rays not through every screen pixel but only through the projected pixels, and start the rendering process directly from the non-empty voxel locations. The new method shows significant time savings when applied to surface extraction, without loss of image quality.
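
    Space leaping along a single ray can be sketched with run-length encoding of empty samples: the ray starts sampling at the first non-empty voxel instead of marching through the empty gap. This is a generic illustration of the idea, not the authors' adaptive scheme.

    ```python
    # Hypothetical sketch: skip empty space along one ray using a
    # run-length encoding of empty vs. non-empty voxel samples.

    def run_length_encode(ray):
        """Encode a 1D list of voxel samples as [is_empty, length] runs."""
        runs = []
        for v in ray:
            empty = (v == 0)
            if runs and runs[-1][0] == empty:
                runs[-1][1] += 1
            else:
                runs.append([empty, 1])
        return runs

    def first_hit(ray):
        """Leap over empty runs; return the index of the first non-empty
        voxel along the ray, or None if the ray is entirely empty."""
        index = 0
        for empty, length in run_length_encode(ray):
            if not empty:
                return index      # start ray casting here, skipping the gap
            index += length
        return None

    ray = [0, 0, 0, 0, 7, 3, 0]
    assert first_hit(ray) == 4        # four empty voxels skipped in one leap
    assert first_hit([0, 0]) is None  # nothing to render along this ray
    ```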

  17. A Clean Adirondack (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D anaglyph showing a microscopic image taken of an area measuring 3 centimeters (1.2 inches) across on the rock called Adirondack. The image was taken at Gusev Crater on the 33rd day of the Mars Exploration Rover Spirit's journey (Feb. 5, 2004), after the rover used its rock abrasion tool brush to clean the surface of the rock. Dust, which was pushed off to the side during cleaning, can still be seen to the left and in low areas of the rock.

  18. Making Inexpensive 3-D Models

    NASA Astrophysics Data System (ADS)

    Manos, Harry

    2016-03-01

    Visual aids are important to student learning, and they help make the teacher's job easier. Keeping with the TPT theme of "The Art, Craft, and Science of Physics Teaching," the purpose of this article is to show how teachers, lacking equipment and funds, can construct a durable 3-D model reference frame and a model gravity well tailored to specific class lessons. Most of the supplies are readily available in the home or at school: rubbing alcohol, a rag, two colors of spray paint, art brushes, and masking tape. The cost of these supplies, if you don't have them, is less than $20.

  19. What Lies Ahead (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D cylindrical-perspective mosaic taken by the navigation camera on the Mars Exploration Rover Spirit on sol 82 shows the view south of the large crater dubbed 'Bonneville.' The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.

  20. Vacant Lander in 3-D

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D image captured by the Mars Exploration Rover Opportunity's rear hazard-identification camera shows the now-empty lander that carried the rover 283 million miles to Meridiani Planum, Mars. Engineers received confirmation that Opportunity's six wheels successfully rolled off the lander and onto martian soil at 3:01 a.m. PST, January 31, 2004, on the seventh martian day, or sol, of the mission. The rover is approximately 1 meter (3 feet) in front of the lander, facing north.

  1. Large area 3-D optical coherence tomography imaging of lumpectomy specimens for radiation treatment planning

    NASA Astrophysics Data System (ADS)

    Wang, Cuihuan; Kim, Leonard; Barnard, Nicola; Khan, Atif; Pierce, Mark C.

    2016-02-01

    Our long term goal is to develop a high-resolution imaging method for comprehensive assessment of tissue removed during lumpectomy procedures. By identifying regions of high-grade disease within the excised specimen, we aim to develop patient-specific post-operative radiation treatment regimens. We have assembled a benchtop spectral-domain optical coherence tomography (SD-OCT) system with 1320 nm center wavelength. Automated beam scanning enables "sub-volumes" spanning 5 mm x 5 mm x 2 mm (500 A-lines x 500 B-scans x 2 mm in depth) to be collected in under 15 seconds. A motorized sample positioning stage enables multiple sub-volumes to be acquired across an entire tissue specimen. Sub-volumes are rendered from individual B-scans in 3D Slicer software and en face (XY) images are extracted at specific depths. These images are then tiled together using MosaicJ software to produce a large area en face view (up to 40 mm x 25 mm). After OCT imaging, specimens were sectioned and stained with H&E, allowing comparison between OCT image features and disease markers on histopathology. This manuscript describes the technical aspects of image acquisition and reconstruction, and reports initial qualitative comparison between large area en face OCT images and H&E stained tissue sections. Future goals include developing image reconstruction algorithms for mapping an entire sample, and registering OCT image volumes with clinical CT and MRI images for post-operative treatment planning.

  2. Using 3D Geometric Models to Teach Spatial Geometry Concepts.

    ERIC Educational Resources Information Center

    Bertoline, Gary R.

    1991-01-01

    An explanation of 3-D Computer Aided Design (CAD) usage to teach spatial geometry concepts using nontraditional techniques is presented. The software packages CADKEY and AutoCAD are described as well as their usefulness in solving space geometry problems. (KR)

  3. 3D optical see-through head-mounted display based augmented reality system and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenliang; Weng, Dongdong; Liu, Yue; Xiang, Li

    2015-07-01

    The combination of health and entertainment becomes possible due to the development of wearable augmented reality equipment and corresponding application software. In this paper, we implemented a fast calibration extended from SPAAM for an optical see-through head-mounted display (OSTHMD) which was made in our lab. During the calibration, the tracking and recognition techniques upon natural targets were used, and the spatial corresponding points had been set in dispersed and well-distributed positions. We evaluated the precision of this calibration, in which the view angle ranged from 0 degree to 70 degrees. Relying on the results above, we calculated the position of human eyes relative to the world coordinate system and rendered 3D objects in real time with arbitrary complexity on OSTHMD, which accurately matched the real world. Finally, we gave the degree of satisfaction about our device in the combination of entertainment and prevention of cervical vertebra diseases through user feedbacks.

  4. New techniques in 3D scalar and vector field visualization

    SciTech Connect

    Max, N.; Crawfis, R.; Becker, B.

    1993-05-05

    At Lawrence Livermore National Laboratory (LLNL) we have recently developed several techniques for volume visualization of scalar and vector fields, all of which use back-to-front compositing. The first renders volume density clouds by compositing polyhedral volume cells or their faces. The second is a "splatting" scheme which composites textures used to reconstruct the scalar or vector fields. One version calculates the necessary texture values in software, and another takes advantage of hardware texture mapping. The next technique renders contour surface polygons using semi-transparent textures, which adjust appropriately when the surfaces deform in a flow, or change topology. The final one renders the "flow volume" of smoke or dye tracer swept out by a fluid flowing through a small generating polygon. All of these techniques are applied to a climate model data set, to visualize cloud density and wind velocity.
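
    The back-to-front compositing these techniques share is the standard over operator, C ← c·α + C·(1 − α), applied from the farthest sample toward the viewer. A minimal single-channel, single-pixel sketch (our illustration of the operator, not LLNL's code):

    ```python
    def composite_back_to_front(samples):
        """Composite (color, alpha) samples ordered back to front using
        the over operator: C = c*a + C_prev*(1 - a)."""
        color = 0.0   # background starts black
        for c, a in samples:          # farthest sample first
            color = c * a + color * (1.0 - a)
        return color

    # Opaque white behind a half-transparent gray cloud:
    samples = [(1.0, 1.0), (0.5, 0.5)]
    assert composite_back_to_front(samples) == 0.75
    ```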

  5. Transforming 2D Cadastral Data Into a Dynamic Smart 3D Model

    NASA Astrophysics Data System (ADS)

    Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2013-08-01

    3D property registration has become an imperative need in order to optimally reflect all complex cases of the multilayer reality of property rights and restrictions, revealing their vertical component. This paper refers to the potentials and multiple applications of 3D cadastral systems and explores the current state-of-the-art, especially the available software with which 3D visualization can be achieved. Within this context, the Hellenic Cadastre's current state is investigated, in particular its data modeling frame. The methodologies and specifications addressing the registration of 3D properties are presented, and the operating cadastral system's shortcomings and merits are pointed out. Nonetheless, current technological advances as well as the availability of sophisticated software packages (proprietary or open source) call for 3D modeling. In order to register and visualize the complex reality in 3D, Esri's CityEngine modeling software has been used, which is specialized in the generation of 3D urban environments, transforming 2D GIS Data into Smart 3D City Models. The application of the 3D model concerns the Campus of the National Technical University of Athens, in which a complex ownership status is established along with approved special zoning regulations. The 3D model was built using different parameters based on input data, derived from cadastral and urban planning datasets, as well as legal documents and architectural plans. The process resulted in a final 3D model, optimally describing the cadastral situation and built environment, and proved to be a good practice example of 3D visualization.

  6. Computing Radiative Transfer in a 3D Medium

    NASA Technical Reports Server (NTRS)

    Von Allmen, Paul; Lee, Seungwon

    2012-01-01

    A package of software computes the time-dependent propagation of a narrow laser beam in an arbitrary three- dimensional (3D) medium with absorption and scattering, using the transient-discrete-ordinates method and a direct integration method. Unlike prior software that utilizes a Monte Carlo method, this software enables simulation at very small signal-to-noise ratios. The ability to simulate propagation of a narrow laser beam in a 3D medium is an improvement over other discrete-ordinate software. Unlike other direct-integration software, this software is not limited to simulation of propagation of thermal radiation with broad angular spread in three dimensions or of a laser pulse with narrow angular spread in two dimensions. Uses for this software include (1) computing scattering of a pulsed laser beam on a material having given elastic scattering and absorption profiles, and (2) evaluating concepts for laser-based instruments for sensing oceanic turbulence and related measurements of oceanic mixed-layer depths. With suitable augmentation, this software could be used to compute radiative transfer in ultrasound imaging in biological tissues, radiative transfer in the upper Earth crust for oil exploration, and propagation of laser pulses in telecommunication applications.
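
    The direct-integration flavor of such a solver can be hinted at with the attenuation of a beam through a medium of varying absorption, accumulating optical depth cell by cell along the propagation path. This is a generic Beer-Lambert sketch under our own assumptions, not this package's transient-discrete-ordinates algorithm.

    ```python
    # Generic sketch: attenuate a beam through absorbing cells by direct
    # integration of the optical depth along the ray.
    import math

    def attenuate(intensity, mu_cells, ds):
        """Apply Beer-Lambert attenuation through cells with absorption
        coefficient mu (per unit length) and path length ds per cell."""
        optical_depth = sum(mu * ds for mu in mu_cells)
        return intensity * math.exp(-optical_depth)

    # Beam through three cells; total optical depth 0.1 + 0.2 + 0.3 = 0.6
    out = attenuate(1.0, [1.0, 2.0, 3.0], ds=0.1)
    assert abs(out - math.exp(-0.6)) < 1e-12
    ```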

  7. Improved Surgery Planning Using 3-D Printing: a Case Study.

    PubMed

    Singhal, A J; Shetty, V; Bhagavan, K R; Ragothaman, Ananthan; Shetty, V; Koneru, Ganesh; Agarwala, M

    2016-04-01

    The role of 3-D printing is presented for improved patient-specific surgery planning. Key benefits are time saved and surgery outcome. Two hard-tissue surgery models were 3-D printed, for orthopedic, pelvic surgery, and craniofacial surgery. We discuss software data conversion of computed tomography (CT)/magnetic resonance (MR) medical images for 3-D printing. 3-D printed models save time in surgery planning and help visualize complex pre-operative anatomy. Time saved in surgery planning can be as much as two-thirds. In addition to improved surgery accuracy, 3-D printing presents opportunity in materials research. Other hard-tissue and soft-tissue cases in maxillofacial, abdominal, thoracic, cardiac, orthodontics, and neurosurgery are considered. We recommend using 3-D printing as standard protocol for surgery planning and for teaching surgery practices. A quick turnaround time of a 3-D printed surgery model, with improved accuracy in surgery planning, is helpful for the surgery team. It is recommended that these costs be within 20% of the total surgery budget. PMID:27303117

  8. Multi-camera system for 3D forensic documentation.

    PubMed

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional documentation tool is photogrammetry. This article introduces the botscan© (botspot GmbH, Berlin, Germany) multi-camera system for the forensic markerless photogrammetric whole-body 3D surface documentation of living persons in standing posture. We used the botscan© multi-camera system to document a person in 360°. The system has a modular design and works with 64 digital single-lens reflex (DSLR) cameras. The cameras were evenly distributed in a circular chamber. We generated 3D models from the photographs using the PhotoScan© (Agisoft LLC, St. Petersburg, Russia) software. Our results revealed that botscan© and PhotoScan© produced 360° 3D models with detailed textures. The 3D models had very accurate geometries and could be scaled to full size with the help of scale bars. In conclusion, this multi-camera system provided a rapid and simple method for documenting the whole body of a person to generate 3D data with PhotoScan©. PMID:26921815
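    Scaling a photogrammetric model to full size with a scale bar, as mentioned above, amounts to a single ratio: the bar's known physical length over its measured length in model units. A minimal sketch (names illustrative):

```python
import numpy as np

def scale_model(points, bar_ends, true_length):
    """Scale a unitless photogrammetric point cloud to metric units
    using a scale bar of known length visible in the scene.
    bar_ends: the two reconstructed endpoints of the bar, in model units."""
    p0, p1 = (np.asarray(e, dtype=float) for e in bar_ends)
    measured = np.linalg.norm(p1 - p0)
    return np.asarray(points, dtype=float) * (true_length / measured)
```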

  9. Accuracy of 3D scanners in tooth mark analysis.

    PubMed

    Molina, Ana; Martin-de-las-Heras, Stella

    2015-01-01

    The objective of this study was to compare the accuracy of contact and laser 3D scanners in tooth mark analysis. Ten dental casts were scanned with both 3D scanners. Seven linear measurements were made from the 3D images of dental casts and biting edges generated with DentalPrint© software (University of Granada, Granada, Spain). The uncertainty value for contact 3D scanning was 0.833 mm for the upper dental cast and 0.660 mm for the lower cast; similar uncertainty values were found for 3D laser scanning. Slightly higher uncertainty values were obtained for the generated 3D biting edges. The uncertainty values for single measurements ranged from 0.1 to 0.3 mm, with the exception of the intercanine distance, for which higher values were obtained. Knowledge of the error rate in the 3D scanning of dental casts and biting edges is especially relevant for application in practical forensic cases. PMID:25388960
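    Uncertainty values like those quoted can be derived from repeated measurements; one common convention (assumed here, not stated in the abstract) is the expanded uncertainty U = k·s with coverage factor k = 2 and sample standard deviation s:

```python
import statistics

def expanded_uncertainty(measurements, k=2):
    """Expanded uncertainty of repeated measurements: coverage factor k
    times the sample standard deviation (a common metrology convention)."""
    return k * statistics.stdev(measurements)
```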

  10. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills, from recognizing rocks, to measuring orientations of features in the field, to finding oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualization skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused on developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org), we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as a transparent feature to explore how the layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided either as movies of the visualization (which can also be used as examples during lectures) or as downloadable data and software allowing more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field

  11. 3D Printable Graphene Composite

    NASA Astrophysics Data System (ADS)

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-07-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, conventional processing methods fail to provide a link to today's personalization tide, so new technology must be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading up to 5.6 wt% can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C-1 from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress minute during the printing process.
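    The quoted coefficient translates directly into printing-induced strain via strain = α·ΔT; for example, 75 ppm/°C over a 60 °C temperature rise gives 0.45% linear strain. A one-line worked check:

```python
def thermal_strain(alpha_ppm_per_C, delta_T_C):
    """Dimensionless linear thermal strain for expansion coefficient
    alpha (in ppm per deg C) over a temperature change delta_T."""
    return alpha_ppm_per_C * 1e-6 * delta_T_C
```

    Keeping α small therefore keeps the differential strain, and hence thermal stress, small as successive layers cool during printing.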

  12. 3D acoustic atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Rogers, Kevin; Finn, Anthony

    2014-10-01

    This paper presents a method for tomographically reconstructing spatially varying 3D atmospheric temperature profiles and wind velocity fields based on measurements of the acoustic signature recorded onboard a small Unmanned Aerial Vehicle (UAV) and compared to ground-based observations of the same signals. The frequency-shifted signal variations are then used to estimate the acoustic propagation delay between the UAV and the ground microphones, which is affected by the atmospheric temperature and wind speed vectors along each sound ray path. The wind and temperature profiles are modelled as the weighted sum of Radial Basis Functions (RBFs), which also allows local meteorological measurements made at the UAV and ground receivers to supplement the acoustic observations. Tomography is used to provide a full 3D reconstruction/visualisation of the observed atmosphere. The technique offers observational mobility under direct user control and the capacity to monitor hazardous atmospheric environments otherwise not justifiable on the basis of cost or risk. This paper summarises the tomographic technique and reports on the results of simulations and initial field trials. The technique has practical applications for atmospheric research, sound propagation studies, boundary layer meteorology, air pollution measurements, analysis of wind shear, and wind farm surveys.
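    Modelling a field as a weighted sum of RBFs reduces reconstruction to solving for the weights from scattered observations. The sketch below uses Gaussian RBFs and a least-squares fit; the basis choice, width parameter, and function names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def gaussian_rbf(points, centers, s):
    """Design matrix: phi_j(x_i) = exp(-|x_i - c_j|^2 / (2 s^2))."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def fit_rbf_weights(obs_points, obs_values, centers, s):
    """Least-squares RBF weights from scattered field observations."""
    A = gaussian_rbf(obs_points, centers, s)
    w, *_ = np.linalg.lstsq(A, obs_values, rcond=None)
    return w
```

    Evaluating `gaussian_rbf(query_points, centers, s) @ w` then reconstructs the field anywhere, which is how point measurements at the UAV and receivers can supplement the chord-integrated acoustic data.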

  13. 3D Printed Bionic Ears

    PubMed Central

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  14. 3-D Relativistic MHD Simulations

    NASA Astrophysics Data System (ADS)

    Nishikaw, K.-I.; Frank, J.; Christodoulou, D. M.; Koide, S.; Sakai, J.-I.; Sol, H.; Mutel, R. L.

    1998-12-01

    We present 3-D numerical simulations of moderately hot, supersonic jets propagating initially along or obliquely to the field lines of a denser magnetized background medium, with Lorentz factors of W=4.56, evolving in a four-dimensional spacetime. The new results are understood as follows: relativistic simulations have consistently shown that these jets are effectively heavy, so they do not suffer substantial momentum losses and are not decelerated as efficiently as their nonrelativistic counterparts. In addition, the ambient magnetic field, however strong, can be pushed aside with relative ease by the beam, provided that the degrees of freedom associated with all three spatial dimensions are followed self-consistently in the simulations. This effect is analogous to pushing Japanese ``noren'' or vertical Venetian blinds out of the way while the slats are allowed to bend in 3-D space rather than as a 2-D slab structure. We also simulate jets with more realistic initial injection conditions (helical magnetic field and perturbed density, velocity, and internal energy), which are expected to arise in the process of jet generation. Three possible explanations for the observed variability are (i) tidal disruption of a star falling into the black hole, (ii) instabilities in the relativistic accretion disk, and (iii) jet-related processes. New results will be reported at the meeting.
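    For the quoted Lorentz factor W = 4.56, the implied beam speed follows from β = √(1 − 1/W²) ≈ 0.976c, a quick worked check of why such jets are "effectively heavy":

```python
import math

def beta_from_lorentz(W):
    """Speed in units of c implied by Lorentz factor W = 1/sqrt(1 - beta^2)."""
    return math.sqrt(1.0 - 1.0 / W ** 2)
```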

  15. 3D printed bionic ears.

    PubMed

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  16. 3D Printable Graphene Composite

    PubMed Central

    Wei, Xiaojun; Li, Dong; Jiang, Wei; Gu, Zheming; Wang, Xiaojuan; Zhang, Zengxing; Sun, Zhengzong

    2015-01-01

    In human history, both the Iron Age and the Silicon Age thrived after a mature mass-processing technology was developed. Graphene is the most recent superior material, one that could potentially initiate another new material age. However, conventional processing methods fail to provide a link to today's personalization tide, so new technology must be ushered in. Three-dimensional (3D) printing fills the missing linkage between graphene materials and the digital mainstream. Their alliance could generate additional momentum to push the graphene revolution into a new phase. Here we demonstrate, for the first time, that a graphene composite with a graphene loading up to 5.6 wt% can be 3D printed into computer-designed models. The composite's linear thermal coefficient is below 75 ppm·°C−1 from room temperature to its glass transition temperature (Tg), which is crucial to keeping thermal stress minute during the printing process. PMID:26153673

  17. 3D medical thermography device

    NASA Astrophysics Data System (ADS)

    Moghadam, Peyman

    2015-05-01

    In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body parts, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data is acquired in motion and thus provides multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary due to factors such as the angle of incidence, the distance between the device and the subject, and environmental conditions affecting the confidence of the thermal-infrared data at capture time. Finally, several case studies are presented to support the usability and performance of the proposed system.
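    Reliability-weighted fusion of overlapping measurements can be sketched as a confidence-weighted average; the per-sample weight below (favouring small incidence angles and short ranges) is an illustrative heuristic, not the paper's actual formula.

```python
import math

def reliability(angle_deg, distance_m, max_angle=75.0):
    """Heuristic confidence for one thermal sample: down-weight grazing
    incidence and long range (illustrative stand-in weighting)."""
    if angle_deg >= max_angle:
        return 0.0
    return math.cos(math.radians(angle_deg)) / distance_m ** 2

def fuse(values, weights):
    """Confidence-weighted average of overlapping measurements of
    the same surface point."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total
```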

  18. 3D Ion Temperature Reconstruction

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; You, Setthivoine; Balandin, Alexander; Inomoto, Michiaki; Ono, Yasushi

    2009-11-01

    The TS-4 experiment at the University of Tokyo collides two spheromaks to form a single high-beta compact toroid. Magnetic reconnection during the merging process heats and accelerates the plasma in the toroidal and poloidal directions. The reconnection region has a complex 3D topology determined by the pitch of the spheromak magnetic fields at the merging plane. A pair of multichord passive spectroscopic diagnostics has been established to measure the ion temperature and velocity in the reconnection volume. One setup measures spectral lines across a poloidal plane, retrieving velocity and temperature by Abel inversion. The other, novel setup records spectral lines across another section of the plasma and reconstructs velocity and temperature using 3D vector and 2D scalar tomography techniques. The magnetic field linking both measurement planes is determined from in situ magnetic probe arrays. The ion temperature is then estimated within the volume between the two measurement planes and at the reconnection region. The measurement is repeated over several reproducible discharges to follow the heating and acceleration process during merging reconnection.
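    The Abel-inversion step can be sketched by onion peeling: model the plasma as concentric shells of constant emissivity, compute each chord's path length through each shell, and solve the resulting triangular system. This is a textbook discretization, not the authors' exact tomography code.

```python
import numpy as np

def path_matrix(n, dr):
    """L[i, j] = path length of chord i (impact parameter i*dr) through
    the annular shell between radii j*dr and (j+1)*dr."""
    y = np.arange(n) * dr            # chord impact parameters
    edges = np.arange(n + 1) * dr    # shell boundary radii
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):        # chord i only crosses shells j >= i
            a = max(edges[j] ** 2 - y[i] ** 2, 0.0)
            b = max(edges[j + 1] ** 2 - y[i] ** 2, 0.0)
            L[i, j] = 2.0 * (np.sqrt(b) - np.sqrt(a))
    return L

def onion_peel(chord_integrals, dr):
    """Recover a radially symmetric emissivity profile from
    chord-integrated measurements (discrete Abel inversion)."""
    n = len(chord_integrals)
    return np.linalg.solve(path_matrix(n, dr), chord_integrals)
```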

  19. 3D Stratigraphic Modeling of Central Aachen

    NASA Astrophysics Data System (ADS)

    Dong, M.; Neukum, C.; Azzam, R.; Hu, H.

    2010-05-01

    Since the 1980s, advanced computer hardware and software technologies, as well as multidisciplinary research, have made it possible to develop advanced three-dimensional (3D) simulation software for geoscience applications. Some countries, such as the USA and Canada, have built up regional 3D geological models based on archival geological data. Such models have played major roles in engineering geology, hydrogeology, the geothermal industry, and so on. In cooperation with the Municipality of Aachen, the Department of Engineering Geology of RWTH Aachen University has built a computer-based 3D stratigraphic model to 50 m depth for the center of Aachen, a geologically complex area of 5 km by 7 km. Uncorrelated data from multiple sources, the discontinuous nature of the units, and their unconformable connections are the main challenges for geological modeling in this area. The reliability of 3D geological models largely depends on the quality and quantity of data. Existing 1D and 2D geological data were collected, including 1) approximately 6970 borehole records of different depths, compiled in Microsoft Access and MapInfo databases; 2) a Digital Elevation Model (DEM); 3) geological cross sections; and 4) stratigraphic maps at 1 m, 2 m and 5 m depth. Since the acquired data are of variable origin, they were managed step by step. The main processes are described below: 1) Typing errors in the borehole data were identified, and the corrected data were exported to Variowin 2.2 to detect duplicate points; 2) The surface elevation of the borehole data was compared to the DEM, and differences larger than 3 m were eliminated; moreover, where elevation data were missing, they were read from the DEM; 3) Considerable data were collected from municipal constructions, such as residential buildings, factories, and roads.
    Therefore, many boreholes are spatially clustered, and only one or two representative points were picked out in such areas. After the above procedures, 5839 boreholes with -x
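    Step 2 of the workflow above, the DEM consistency check, can be sketched in a few lines of numpy (names and the array-per-borehole layout are illustrative): fill missing collar elevations from the DEM and flag boreholes whose elevation disagrees with the DEM by more than the tolerance.

```python
import numpy as np

def clean_elevations(borehole_z, dem_z, tol=3.0):
    """Return (elevations, keep_mask): missing borehole elevations (NaN)
    are filled from the DEM; boreholes differing from the DEM by more
    than `tol` metres are flagged for elimination."""
    bh = np.asarray(borehole_z, dtype=float)
    dem = np.asarray(dem_z, dtype=float)
    z = np.where(np.isnan(bh), dem, bh)   # fill gaps from the DEM
    keep = np.abs(z - dem) <= tol         # drop gross mismatches
    return z, keep
```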

  20. Real-time volume rendering of digital medical images on an iOS device

    NASA Astrophysics Data System (ADS)

    Noon, Christian; Holub, Joseph; Winer, Eliot

    2013-03-01

    Performing high-quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still a quite difficult task. This is especially true for 3D volume rendering of digital medical images. Achieving it would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features, including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs), such as OpenSceneGraph (OSG) as the primary graphics renderer, coupled with iOS Cocoa Touch for user interaction and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device, so no Internet connection is required.
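    Orthogonal texture slicing draws view-aligned slices back to front, accumulating with the alpha "over" operator; this is why the sorting of rendered geometry mentioned above matters. Per ray, with a single colour channel for brevity, the accumulation is:

```python
def composite(samples):
    """Back-to-front alpha compositing (the 'over' operator) of
    (color, alpha) samples along one ray, farthest sample first."""
    acc = 0.0
    for color, alpha in samples:
        # each nearer sample occludes a fraction `alpha` of what lies behind
        acc = color * alpha + acc * (1.0 - alpha)
    return acc
```

    Compositing in the wrong order gives a visibly different (incorrect) result, which is why the renderer needs correct depth sorting before blending.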

  1. Evaluation of free non-diagnostic DICOM software tools

    NASA Astrophysics Data System (ADS)

    Liao, Wei; Deserno, Thomas M.; Spitzer, Klaus

    2008-03-01

    A variety of software exists to interpret files or directories compliant to the Digital Imaging and Communications in Medicine (DICOM) standard and display them as individual images or volume rendered objects. Some of them offer further processing and analysis features. The surveys that have been published so far are partly not up-to-date anymore, and neither a detailed description of the software functions nor a comprehensive comparison is given. This paper aims at evaluation and comparison of freely available, non-diagnostic DICOM software with respect to the following aspects: (i) data import; (ii) data export; (iii) header viewing; (iv) 2D image viewing; (v) 3D volume viewing; (vi) support; (vii) portability; (viii) workability; and (ix) usability. In total, 21 tools were included: 3D Slicer, AMIDE, BioImage Suite, DicomWorks, EViewBox, ezDICOM, FPImage, ImageJ, JiveX, Julius, MedImaView, MedINRIA, MicroView, MIPAV, MRIcron, Osiris, PMSDView, Syngo FastView, TomoVision, UniViewer, and XMedCon. Our results in table form can ease the selection of appropriate DICOM software tools. In particular, we discuss use cases for the inexperienced user, data conversion, and volume rendering, and suggest Syngo FastView or PMSDView, DicomWorks or XMedCon, and ImageJ or UniViewer, respectively.

  2. Text Rendering: Beginning Literary Response.

    ERIC Educational Resources Information Center

    Robertson, Sandra L.

    1990-01-01

    Argues that "text rendering"--responding to oral readings by saying back remembered words or phrases--forces students to prolong their initial responses to texts and opens initial response to the influence of other readers. Argues that silence following oral readings allows words to sink into students' minds, creating individual images and…

  3. 3D Printing of Graphene Aerogels.

    PubMed

    Zhang, Qiangqiang; Zhang, Feng; Medarametla, Sai Pradeep; Li, Hui; Zhou, Chi; Lin, Dong

    2016-04-01

    3D printing of a graphene aerogel with true 3D overhang structures is highlighted. The aerogel is fabricated by combining drop-on-demand 3D printing and freeze casting. The water-based GO ink is ejected and freeze-cast into designed 3D structures. The lightweight (<10 mg cm(-3)) 3D-printed graphene aerogel exhibits superelasticity and high electrical conductivity. PMID:26861680

  4. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end- to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  5. ShowMe3D

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three-dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.
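    Applying a spectral filter to a hyperspectral stack, as the tool does, is a weighted sum over the band axis. A numpy sketch, assuming a (bands, height, width) array layout and an illustrative function name:

```python
import numpy as np

def apply_filter(cube, response):
    """Collapse a hyperspectral cube of shape (bands, H, W) to a single
    (H, W) image by weighting each band with a filter transmission curve."""
    cube = np.asarray(cube, dtype=float)
    response = np.asarray(response, dtype=float)
    # contract the band axis of `cube` against the response curve
    return np.tensordot(response, cube, axes=(0, 0))
```

    Region statistics such as the intensity average and variance then reduce to `img[mask].mean()` and `img[mask].var()` on the filtered image.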

  6. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  7. 3D Elastic Wavefield Tomography

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Stekl, I.; Umpleby, A.; Shah, N.

    2010-12-01

    Wavefield tomography, or waveform inversion, aims to extract the maximum information from seismic data by matching, trace by trace, the response of the solid earth to seismic waves using numerical modelling tools. Its first formulation dates from the early 80's, when Albert Tarantola developed a solid theoretical basis that is still used today with little change. Due to computational limitations, the application of the method to 3D problems was unaffordable until a few years ago, and then only under the acoustic approximation. Although acoustic wavefield tomography is widely used, a complete solution of the seismic inversion problem requires that we account properly for the physics of wave propagation, and so must include elastic effects. We have developed a 3D tomographic wavefield inversion code that incorporates the full elastic wave equation. The bottleneck of the different implementations is the forward modelling algorithm, which generates the synthetic data to be compared with the field seismograms, as well as the backpropagation of the residuals needed to form the update direction of the model parameters. Furthermore, one or two extra modelling runs are needed to calculate the step length. Our approach uses explicit time-stepping by finite differences, 4th order in space and 2nd order in time, a 3D version of the scheme developed by Jean Virieux in 1986. We chose the time domain because an explicit time scheme is much less demanding in terms of memory than its frequency-domain analogue, although the discussion of which domain is more efficient remains open. We calculate the parameter gradients for Vp and Vs by correlating the normal and shear stress wavefields respectively. A straightforward application would lead to the storage of the wavefield at all grid points at each time-step. We tackled this problem using two different approaches.
The first one makes better use of resources for small models of dimension equal
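    The 4th-order-in-space, 2nd-order-in-time stepping can be illustrated in 1D: the spatial second derivative uses the stencil (-1, 16, -30, 16, -1)/12 and the time update is the standard leapfrog. This is a sketch of the scalar wave equation only; the Virieux scheme proper is a staggered-grid velocity-stress formulation.

```python
import numpy as np

def step_wave(u_prev, u_curr, c, dx, dt):
    """One leapfrog update of the 1D wave equation u_tt = c^2 u_xx,
    2nd order in time, 4th order in space (fixed boundaries)."""
    # 4th-order second derivative on interior points
    lap = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
           + 16 * u_curr[3:-1] - u_curr[4:]) / (12 * dx ** 2)
    u_next = u_curr.copy()
    u_next[2:-2] = 2 * u_curr[2:-2] - u_prev[2:-2] + (c * dt) ** 2 * lap
    return u_next
```

    In the inversion, running such a kernel forward generates the synthetic seismograms, and running it on the time-reversed residuals yields the fields that are correlated to form the Vp and Vs gradients.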

  8. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture. PMID:26164291

  9. Fast volume rendering algorithm in a virtual endoscopy system

    NASA Astrophysics Data System (ADS)

    Kim, Sang H.; Kim, Jin K.; Ra, Jong Beom

    2002-05-01

    Recently, 3D virtual endoscopy has been used as an alternative noninvasive procedure for visualization of a hollow organ. In this paper, we propose a fast volume rendering scheme based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides a volume into hierarchical blocks and classifies them as opaque or transparent. The rendering procedure is then as follows. In the first step, we perform ray casting only for sub-sampled pixels on the image plane and determine their pixel values and depth information. In the second step, we halve the sub-sampling factor and repeat ray casting for the newly added pixels, determining their pixel values and depth information; here, the previously obtained depth information is used to reduce the processing time. This step is performed recursively until the full-size rendered image is acquired. Experiments conducted on a PC show that the proposed algorithm can reduce the rendering time by 70-80% for bronchus and colon endoscopy, compared with the brute-force ray casting scheme. Interactive rendering thereby becomes more realizable in a PC environment without any specific hardware.
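    The coarse-to-fine scheduling above can be sketched as follows: each pass halves the sub-sampling stride and casts rays only for pixels not covered by an earlier pass. This shows the scheduling only; the depth reuse and opaque/transparent block classification are omitted, and the names are illustrative.

```python
def refinement_passes(width, height, start=8):
    """Pixel coordinates cast in each pass of a coarse-to-fine loop:
    pass 0 casts every `start`-th pixel; each later pass halves the
    stride and casts only pixels not yet rendered."""
    done = set()
    passes = []
    stride = start
    while stride >= 1:
        new = [(x, y) for y in range(0, height, stride)
                      for x in range(0, width, stride)
                      if (x, y) not in done]
        done.update(new)
        passes.append(new)
        stride //= 2
    return passes
```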

  10. Joint 3d Estimation of Vehicles and Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.; Geiger, A.

    2015-08-01

    While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.

  11. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Morán, Francisco; Preda, Marius; Lafruit, Gauthier; Villegas, Paulo; Berretty, Robert-Paul

    2007-12-01

    Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.
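    Dynamic content adaptation ultimately reduces to choosing, per terminal and network, the richest representation of each scalably coded asset that fits the available budget. A toy policy sketch (the data structure and names are assumptions for illustration, not the MPEG-4 AFX mechanism):

```python
def pick_lod(levels, budget_bytes):
    """Choose the largest level-of-detail representation that fits the
    bandwidth budget; returns None if even the smallest does not fit."""
    best = None
    for lod in sorted(levels, key=lambda l: l["bytes"]):
        if lod["bytes"] <= budget_bytes:
            best = lod  # keep upgrading while it still fits
    return best
```

    With a truly scalable bitstream the choice is even simpler: the server just truncates the embedded stream at the budget, which is the advantage of the multiresolution representations described above.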

  12. Use Models like Maps in a 3D SDI

    NASA Astrophysics Data System (ADS)

    Gietzel, Jan; Gabriel, Paul; Schaeben, Helmut; Le, Hai Ha

    2013-04-01

    Digital geological applications now extend to 3D and even 4D modelling of the subsurface, and modellers work very heterogeneously in terms of the software systems they apply. At the same time, 3D/4D modelling of the subsurface has become part of the work of geological surveys all around the world. This implies a widespread group of users, working in different institutions, aiming to work together on one subsurface model. Established 3D/4D modelling software systems mainly use a file-based approach to store data, which contrasts sharply with the needs of a centrally administrated, network-based data transfer approach. At the Department of Geophysics and Geoinformation Sciences of the Technical University Bergakademie Freiberg, the GST system for managing 3D and 4D geoscience data in a database system was developed; it is now continued by the company GiGa infosystems. The GST framework includes a storage engine, a web service for sharing, and a number of clients, including a browser-based interface for visualising, accessing and manipulating geological CAD data. With its check-out system, GST supports multi-user editing of huge models and is designed to manage seamless high-resolution models of the subsurface. In complex projects, various software packages are used for creating the model, predicting properties, and running final simulations. A problem arising from the use of several software packages is the interoperability of the models: due to conversion errors, different working groups mainly use different raw data, which results in different models that have to be corrected with additional effort. A single platform for sharing the models is strongly needed, and one high-potential solution is centralized, software-independent storage, which will be presented.

  13. Compressive rendering: a rendering application of compressed sensing.

    PubMed

    Sen, Pradeep; Darabi, Soheil

    2011-04-01

    Recently, there has been growing interest in compressed sensing (CS), the new theory that shows how a small set of linear measurements can be used to reconstruct a signal if it is sparse in a transform domain. Although CS has been applied to many problems in other fields, in computer graphics it has so far only been used to accelerate the acquisition of light transport. In this paper, we propose a novel application of compressed sensing: accelerating ray-traced rendering by exploiting the sparsity of the final image in the wavelet basis. To do this, we ray trace only a subset of the pixel samples in the spatial domain and use a simple, greedy CS-based algorithm to estimate the wavelet transform of the image during rendering. Since the energy of the image is concentrated more compactly in the wavelet domain, fewer samples are required for a result of given quality than with conventional spatial-domain rendering. By taking the inverse wavelet transform of the result, we compute an accurate reconstruction of the desired final image. Our results show that our framework can achieve high-quality images with approximately 75 percent of the pixel samples using a nonadaptive sampling scheme. In addition, we perform better than other algorithms that might be used to fill in the missing pixel data, such as interpolation or inpainting. Furthermore, since the algorithm works in image space, it is completely independent of scene complexity. PMID:21311092
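The core idea — render only a subset of pixels, then exploit wavelet-domain sparsity to recover the rest — can be sketched with an iterative soft-thresholding (ISTA) loop over a one-level orthonormal Haar transform. This is not the authors' greedy algorithm; it is a hedged illustration, and the threshold and iteration count are arbitrary assumptions:

```python
import numpy as np

def haar2(x):
    """One-level orthonormal 2D Haar transform (even-sized square input)."""
    s = np.sqrt(2.0)
    r = np.concatenate([x[0::2] + x[1::2], x[0::2] - x[1::2]], axis=0) / s
    return np.concatenate([r[:, 0::2] + r[:, 1::2],
                           r[:, 0::2] - r[:, 1::2]], axis=1) / s

def ihaar2(w):
    """Exact inverse of haar2."""
    s = np.sqrt(2.0)
    h, half = np.empty_like(w), w.shape[1] // 2
    h[:, 0::2] = (w[:, :half] + w[:, half:]) / s
    h[:, 1::2] = (w[:, :half] - w[:, half:]) / s
    x, half = np.empty_like(w), w.shape[0] // 2
    x[0::2] = (h[:half] + h[half:]) / s
    x[1::2] = (h[:half] - h[half:]) / s
    return x

def cs_reconstruct(measured, mask, iters=50, lam=0.05):
    """Fill in unrendered pixels by ISTA: re-impose the rendered samples,
    then soft-threshold the Haar coefficients to promote sparsity."""
    x = measured.copy()
    for _ in range(iters):
        x[mask] = measured[mask]          # data consistency on rendered pixels
        w = haar2(x)
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # soft threshold
        x = ihaar2(w)
    x[mask] = measured[mask]
    return x
```

A multi-level wavelet transform and an adaptive threshold schedule would come closer to practical quality, at the cost of a longer sketch.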

  14. 3D surface digitizing and modeling development at ITRI

    NASA Astrophysics Data System (ADS)

    Hsueh, Wen-Jean

    2000-06-01

    This paper gives an overview of the research and development activities in 3D surface digitizing and modeling conducted at the Industrial Technology Research Institute (ITRI) of Taiwan over the past decade. As a major technology and consulting service provider in the area, ITRI has developed 3D laser scanning digitizers ranging from low-cost compact units and industrial CAD/CAM digitizers to a large human-body scanner, with in-house 3D surface modeling software providing a total solution in reverse engineering, which requires the capability to process large amounts of 3D data. Based on both hardware and software technologies in scanning, merging, registration, surface fitting, reconstruction, and compression, ITRI is now exploring innovative methodologies that provide higher performance, including hardware-based correlation algorithms with advanced camera designs, animated surface model reconstruction, and optical tracking for motion capture. It is expected that the need for easy and fast high-quality 3D information will grow exponentially in the near future, at the same amazing rate as the internet and the human desire for realistic and natural images.

  15. An image encryption algorithm based on 3D cellular automata and chaotic maps

    NASA Astrophysics Data System (ADS)

    Del Rey, A. Martín; Sánchez, G. Rodríguez

    2015-05-01

    A novel encryption algorithm to cipher digital images is presented in this work. The digital image is arranged into a three-dimensional (3D) lattice, and the protocol consists of two phases: a confusion phase, in which 24 chaotic Cat maps are applied, and a diffusion phase, in which a 3D cellular automaton is evolved. The encryption method is shown to be secure against the most important cryptanalytic attacks.
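The confusion phase relies on chaotic Cat maps, which shuffle pixel positions with an area-preserving modular linear map. A minimal 2D Arnold cat map sketch (the paper applies 24 such maps on a 3D lattice; this single 2D round is only meant to show the permutation mechanics):

```python
import numpy as np

def cat_map(img):
    """One round of the Arnold cat map on a square image:
    (i, j) -> (i + j, i + 2j) mod n, a bijection since det [[1,1],[1,2]] = 1."""
    n = img.shape[0]
    i, j = np.indices((n, n))
    out = np.empty_like(img)
    out[(i + j) % n, (i + 2 * j) % n] = img
    return out

def inverse_cat_map(img):
    """Invert cat_map by reading each pixel back from its mapped location."""
    n = img.shape[0]
    i, j = np.indices((n, n))
    return img[(i + j) % n, (i + 2 * j) % n]
```

Confusion alone only permutes intensities; the histogram is unchanged, which is exactly why a diffusion phase (here, the 3D cellular automaton) is needed on top.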

  16. 3D Fiber Orientation Simulation for Plastic Injection Molding

    NASA Astrophysics Data System (ADS)

    Lin, Baojiu; Jin, Xiaoshi; Zheng, Rong; Costa, Franco S.; Fan, Zhiliang

    2004-06-01

    Glass fiber reinforced polymers are widely used in products made by injection molding. The distribution of fiber orientation inside plastic parts directly affects the quality of molded parts. Computer simulation to predict fiber orientation distribution is one of the most efficient ways to assist engineers in warpage analysis and in finding a good design solution for producing high-quality plastic parts. Fiber orientation simulation software based on 2-1/2D (midplane/dual-domain mesh) techniques has been used in industry for a decade. However, the 2-1/2D technique is based on the planar Hele-Shaw approximation and is not suitable when the geometry has complex three-dimensional features that cannot be well approximated by 2D shells. Recently, full 3D fiber orientation simulation software was developed and integrated into the Moldflow Plastics Insight 3D simulation software. The theory behind this new 3D fiber orientation calculation module is described in this paper, and several examples are presented to show the benefit of 3D fiber orientation simulation.
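Fiber orientation states like those predicted here are conventionally summarized by the second-order orientation tensor a_ij = ⟨p_i p_j⟩ over unit fiber directions p (the Advani-Tucker description that orientation solvers evolve). A minimal sketch of computing it from sampled directions — an illustration of the quantity itself, not Moldflow's solver:

```python
import numpy as np

def orientation_tensor(p):
    """Second-order orientation tensor a_ij = <p_i p_j> for unit fibers.

    p: (N, 3) array of fiber direction vectors (normalized here for safety).
    The trace of the result is always 1; full alignment along x gives
    diag(1, 0, 0), and the 3D isotropic state gives (1/3) * identity."""
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    return np.einsum('ni,nj->ij', p, p) / len(p)
```

Because the tensor is symmetric with unit trace, five numbers per node suffice to characterize the local orientation state, which is what makes it practical for full 3D simulation output.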

  17. Impact of the 3-D model strategy on science learning of the solar system

    NASA Astrophysics Data System (ADS)

    Alharbi, Mohammed

    The purpose of this mixed-method study, quantitative and descriptive, was to determine whether first-middle grade (seventh grade) students at Saudi schools are able to learn and use the Autodesk Maya software to create and interact with their own 3-D models and animations, and whether their use of the software influences their study habits and their understanding of the school subject matter. The study revealed that there is value to science students in using 3-D software to create 3-D models for science assignments. This study also aimed to address middle-school students' ability to learn 3-D software in art class and then ultimately use it in their science class. The success of this study may open the way to considering the impact of 3-D modeling on other school subjects, such as mathematics, art, and geography. When students start using graphic design, including 3-D software, at a young age, they tend to develop personal creativity and skills. The success of this study, if applied in schools, will provide the community with skillful young designers and increase awareness of graphic design and the new 3-D technology. An experimental method was used to answer the quantitative research question: are there significant differences among learning methods using 3-D models (no 3-D, premade 3-D, and create 3-D) in a science class on the solar system, in terms of their impact on students' science achievement scores? A descriptive method was used to answer the qualitative research questions, which concern the difficulty of learning and using the Autodesk Maya software, the time students take to use the basic levels of the Polygon and Animation parts of the software, and the quality of the students' work.

  18. Microbial pathogen quality criteria of rendered products.

    PubMed

    Pandey, Pramod K; Biswas, Sagor; Kass, Philip

    2016-06-01

    The North American rendering industry processes approximately 24 million metric tons (Mt) of raw materials and produces more than 8 million Mt of rendered products. More than 85 % of rendered products produced annually in the USA are used for producing animal feed. Pathogen contamination in rendered products is an important and topical issue. Although elevated temperatures (115-140 °C) for 40-90 min during the standard rendering processes are mathematically sufficient to completely destroy commonly found pathogens, the presence of pathogens in rendered products has nevertheless been reported. Increased concern over the risk of microbial contamination in rendered products may require additional safeguards for producing pathogen-free rendered products. This study provides an overview of rendered products, existing microbial pathogen quality criteria of rendered products (MPQCR), limitations, and the scope of improving the MPQCR. PMID:27121572

  19. Stereoscopic contents authoring system for 3D DMB data service

    NASA Astrophysics Data System (ADS)

    Lee, BongHo; Yun, Kugjin; Hur, Namho; Kim, Jinwoong; Lee, SooIn

    2009-02-01

    This paper presents a stereoscopic contents authoring system that covers the creation and editing of stereoscopic multimedia contents for 3D DMB (Digital Multimedia Broadcasting) data services. The main concept of the 3D DMB data service is that, instead of full 3D video, partial stereoscopic objects (stereoscopic JPEG, PNG and MNG) are stereoscopically displayed on the 2D background video plane. In order to provide stereoscopic objects, we design and implement a 3D DMB content authoring system which provides convenient and straightforward content creation and editing functionality. For the creation of stereoscopic contents, we focused on two methods: CG (Computer Graphics) based creation and real image based creation. In the CG based scenario, CG data generated with the conventional MAYA or 3DS MAX tools is rendered into stereoscopic images by applying suitable disparity and camera parameters; we use X-files for direct conversion to stereoscopic objects, so-called 3D DMB objects. In the real image based scenario, the chroma-key method is applied to real video sequences to acquire alpha-mapped images, which are in turn directly converted to stereoscopic objects. The stereoscopic content editing module includes a timeline editor for both stereoscopic video and stereoscopic objects. For the verification of created stereoscopic contents, we implemented a content verification module that verifies and modifies the contents by adjusting the disparity. The proposed system will leverage the creation of stereoscopic content for mobile 3D data services, especially targeted at T-DMB, with its capabilities for CG and real image based content creation, timeline editing and content verification.

  20. Non Destructive High-Resolution 3D Investigation of Vesicle Textures in Pumice and Scoria by Synchrotron X-Ray Computed Microtomography

    NASA Astrophysics Data System (ADS)

    Polacci, M.; Baker, D.; Mancini, L.; Tromba, G.; Zanini, F.

    2005-12-01

    High resolution X-ray computed microtomography was applied to investigate the 3D structure of pyroclastic material from different active, explosive, hazardous volcanic areas. The experiments were performed at the SYRMEP beamline of the ELETTRA synchrotron radiation facility in Trieste (Italy). The 2D image slices resulting from tomography of selected pumice and scoria samples were transformed into volume renderings via specific tomographic software. The reconstructed volumes allowed us to test the applicability of this technique, novel in the field of volcanology, to volcanic specimens with different textural characteristics. The use of a third generation synchrotron radiation facility allowed optimal visualization of vesicle and crystal geometry in the reconstructed volume where conventional X-ray methods are strongly limited. The BLOB3D software package was used to obtain quantitative descriptions of vesicle textures in terms of vesicularity, number density, volume and connectivity. The results revealed complex patterns of vesicle content, size, shape and distribution within the different pyroclasts and allowed us to track the degassing history of each individual clast. With this preliminary study we demonstrate that computed microtomography is a feasible tool complementary to conventional microscopy methods for the full 3D textural characterization of volcanic clasts, and that it may be used to provide further constraints to models of degassing at active volcanoes.
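The quantitative texture measures reported here — vesicularity, number density, per-vesicle volumes, connectivity — all rest on labeling connected voxel clusters in the segmented volume. A minimal sketch using scipy's 3D connected-component labeling (the segmentation and voxel size are assumed given; this is not the BLOB3D package itself):

```python
import numpy as np
from scipy import ndimage

def vesicle_stats(vesicles, voxel_volume=1.0):
    """Label connected vesicle voxels (6-connectivity) and report
    vesicularity, vesicle count, and individual vesicle volumes.

    vesicles: 3D boolean array, True where a voxel belongs to a vesicle."""
    labels, n = ndimage.label(vesicles)
    volumes = np.bincount(labels.ravel())[1:] * voxel_volume  # drop background
    vesicularity = vesicles.mean()
    return vesicularity, n, volumes
```

Number density then follows by dividing the vesicle count by the sample volume, and a vesicle size distribution by histogramming the returned volumes.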

  1. 3D multiplexed immunoplasmonics microscopy

    NASA Astrophysics Data System (ADS)

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-01

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K+ channel subunit KV1.1) on human cancer CD44+ EGFR+ KV1.1+ MDA-MB-231 cells and reference CD44- EGFR- KV1.1+ 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third, the developed

  2. Human tooth pulp anatomy visualization by 3D magnetic resonance microscopy

    PubMed Central

    Sustercic, Dusan; Sersa, Igor

    2012-01-01

    Background Precise assessment of dental pulp anatomy is of extreme importance for a successful endodontic treatment. As standard radiographs of teeth provide very limited information on dental pulp anatomy, more capable methods are highly appreciated. One of these is 3D magnetic resonance (MR) microscopy, whose diagnostic capabilities in terms of better dental pulp anatomy assessment were evaluated in this study. Materials and methods Twenty extracted human teeth were scanned on a 2.35 T MRI system for MR microscopy using the 3D spin-echo method, which enabled image acquisition with an isotropic resolution of 100 μm. The 3D images were then post-processed with the ImageJ program (NIH) to obtain advanced volume-rendered views of the dental pulps. Results MR microscopy at 2.35 T provided accurate data on dental pulp anatomy in vitro. The data were presented as a sequence of thin 2D slices through the pulp in various orientations or as volume-rendered 3D images reconstructed from arbitrary viewpoints. Sequential 2D images enabled only an approximate assessment of the pulp, while volume-rendered 3D images were more precise in visualizing pulp anatomy and clearly showed pulp diverticles, the number of pulp canals and root canal anastomoses. Conclusions This in vitro study demonstrated that MR microscopy can provide very accurate 3D visualization of dental pulp anatomy. A possible future in vivo application of the method may be of great importance for endodontic treatment. PMID:22933973
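Volume rendering of 3D MR data like this boils down to projecting voxel intensities along rays. Two of the simplest modes — maximum-intensity projection and front-to-back alpha compositing along an axis — can be sketched in a few lines (a toy illustration, not ImageJ's renderer; the uniform opacity is an assumption):

```python
import numpy as np

def mip(vol, axis=0):
    """Maximum-intensity projection: the brightest voxel along each ray."""
    return vol.max(axis=axis)

def composite(vol, alpha=0.5):
    """Front-to-back alpha compositing along axis 0 with uniform opacity."""
    color = np.zeros(vol.shape[1:])
    transmittance = np.ones(vol.shape[1:])
    for z_slice in vol:                    # nearest slice first
        color += transmittance * alpha * z_slice
        transmittance *= (1.0 - alpha)
    return color
```

A production renderer would cast rays at arbitrary angles with a per-voxel transfer function mapping intensity to color and opacity, but the accumulation rule is the same.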

  3. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraum simulations that is optimized so that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate several problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.

  4. 3D multiplexed immunoplasmonics microscopy.

    PubMed

    Bergeron, Éric; Patskovsky, Sergiy; Rioux, David; Meunier, Michel

    2016-07-21

    Selective labelling, identification and spatial distribution of cell surface biomarkers can provide important clinical information, such as distinction between healthy and diseased cells, evolution of a disease and selection of the optimal patient-specific treatment. Immunofluorescence is the gold standard for efficient detection of biomarkers expressed by cells. However, antibodies (Abs) conjugated to fluorescent dyes remain limited by their photobleaching, high sensitivity to the environment, low light intensity, and wide absorption and emission spectra. Immunoplasmonics is a novel microscopy method based on the visualization of Abs-functionalized plasmonic nanoparticles (fNPs) targeting cell surface biomarkers. Tunable fNPs should provide higher multiplexing capacity than immunofluorescence since NPs are photostable over time, strongly scatter light at their plasmon peak wavelengths and can be easily functionalized. In this article, we experimentally demonstrate accurate multiplexed detection based on the immunoplasmonics approach. First, we achieve the selective labelling of three targeted cell surface biomarkers (cluster of differentiation 44 (CD44), epidermal growth factor receptor (EGFR) and voltage-gated K(+) channel subunit KV1.1) on human cancer CD44(+) EGFR(+) KV1.1(+) MDA-MB-231 cells and reference CD44(-) EGFR(-) KV1.1(+) 661W cells. The labelling efficiency with three stable specific immunoplasmonics labels (functionalized silver nanospheres (CD44-AgNSs), gold (Au) NSs (EGFR-AuNSs) and Au nanorods (KV1.1-AuNRs)) detected by reflected light microscopy (RLM) is similar to the one with immunofluorescence. Second, we introduce an improved method for 3D localization and spectral identification of fNPs based on fast z-scanning by RLM with three spectral filters corresponding to the plasmon peak wavelengths of the immunoplasmonics labels in the cellular environment (500 nm for 80 nm AgNSs, 580 nm for 100 nm AuNSs and 700 nm for 40 nm × 92 nm AuNRs). Third

  5. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  6. View synthesis techniques for 3D video

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Lai, Po-Lin; Lopez, Patrick; Gomila, Cristina

    2009-08-01

    To facilitate new video applications such as three-dimensional video (3DV) and free-viewpoint video (FVV), the multiview plus depth (MVD) format, which consists of both video views and the corresponding per-pixel depth images, is being investigated. Virtual views can be generated using depth image based rendering (DIBR), which takes video and the corresponding depth images as input. This paper discusses view synthesis techniques based on DIBR, including forward warping, blending, and hole filling. In particular, we emphasize the techniques contributed to the MPEG view synthesis reference software (VSRS). Unlike in the field of computer graphics, ground truth depth images for natural content are very difficult to obtain, and the estimated depth images used for view synthesis typically contain various types of noise. Some synthesis modes robust against these depth errors are also presented in this paper. In addition, we briefly discuss how the synthesis techniques can be used, with minor modifications, to generate the occlusion layer information for layered depth video (LDV) data, another potential format for 3DV applications.
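The forward-warping step of DIBR — shift each pixel by its disparity, resolve conflicts with a disparity z-buffer (nearer pixels win), then fill disocclusion holes — can be sketched for a single-channel, horizontal-disparity case. This is a toy illustration of the mechanics only, not VSRS; the left-neighbor hole fill is a crude stand-in for real inpainting:

```python
import numpy as np

def forward_warp(img, disp):
    """Warp img to a virtual view: pixel (y, x) moves to (y, x + disp[y, x]).
    Conflicts keep the largest disparity (the nearest pixel); remaining
    holes are filled from the left neighbor as a naive heuristic."""
    h, w = img.shape
    out = np.zeros_like(img)
    filled = np.zeros((h, w), dtype=bool)
    best = np.full((h, w), -np.inf)       # disparity z-buffer
    for y in range(h):
        for x in range(w):
            xt = x + int(round(disp[y, x]))
            if 0 <= xt < w and disp[y, x] > best[y, xt]:
                best[y, xt] = disp[y, x]
                out[y, xt] = img[y, x]
                filled[y, xt] = True
    for y in range(h):                    # naive hole filling
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

When depth maps are noisy, exactly this pipeline produces the cracks and ghosting that the robust VSRS synthesis modes described in the paper are designed to suppress.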

  7. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  8. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  9. Geological interpretation and analysis of surface based, spatially referenced planetary imagery data using PRoGIS 2.0 and Pro3D.

    NASA Astrophysics Data System (ADS)

    Barnes, R.; Gupta, S.; Giordano, M.; Morley, J. G.; Muller, J. P.; Tao, Y.; Sprinks, J.; Traxler, C.; Hesina, G.; Ortner, T.; Sander, K.; Nauschnegg, B.; Paar, G.; Willner, K.; Pajdla, T.

    2015-10-01

    We apply the capabilities of the geosp