Parallel Rendering of Large Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Garbutt, Alexander E.
2005-01-01
Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, disk space, network bandwidth and CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Finally, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system, using a time-varying dataset from selected JPL applications.
Real-time volume rendering of digital medical images on an iOS device
NASA Astrophysics Data System (ADS)
Noon, Christian; Holub, Joseph; Winer, Eliot
2013-03-01
Performing high-quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still a difficult task. This is especially true for 3D volume rendering of digital medical images. Enabling this would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features, including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs): OpenSceneGraph (OSG) as the primary graphics renderer, coupled with iOS Cocoa Touch for user interaction and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device, so no Internet connection is required.
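The orthogonal texture slicing mentioned above renders a stack of slices along whichever volume axis is most aligned with the view direction. A minimal sketch of that axis selection (the function names and the back-to-front sign convention are our assumptions, not taken from the paper):

```python
def slicing_axis(view_dir):
    """Pick the volume axis (0=x, 1=y, 2=z) most aligned with the view
    direction; the slice stack is rendered along this axis."""
    return max(range(3), key=lambda i: abs(view_dir[i]))

def slice_order(view_dir, n_slices):
    """Slice indices in back-to-front order for alpha blending; reversed
    when the view direction points along the negative axis (assumed
    convention)."""
    axis = slicing_axis(view_dir)
    order = list(range(n_slices))
    if view_dir[axis] < 0:
        order.reverse()
    return axis, order

print(slicing_axis((0.1, -0.9, 0.2)))  # → 1 (y axis dominates)
```

Re-selecting the axis whenever the camera crosses a diagonal keeps the slices roughly perpendicular to the view ray, which is what makes this approach viable on OpenGL ES 2.0-class hardware without 3D textures.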
NASA Astrophysics Data System (ADS)
Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong
2006-03-01
Dual-modality imaging scanners combining functional PET and anatomical CT pose a challenge for volumetric visualization that can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools to navigate and manipulate the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in volume renderings of PET/CT. It works by assigning a non-linear opacity to the voxels, thus allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resulting manipulated datasets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUTs, and volume slicing, our strategy permits efficient visualization of PET/CT volume renderings, which can potentially aid in interpretation and diagnosis.
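The "alpha-spike" transfer function is described only qualitatively above; a plausible sketch is an opacity curve that is near zero everywhere except a sharp peak around a chosen intensity, so only a narrow intensity band is revealed (the falloff shape and parameter names here are our assumptions, not the paper's exact function):

```python
def alpha_spike(intensity, center, width, peak=1.0):
    """Non-linear opacity: zero outside [center-width, center+width],
    with a steep (quartic) falloff forming a 'spike' at `center`."""
    d = abs(intensity - center) / width
    if d >= 1.0:
        return 0.0
    return peak * (1.0 - d) ** 4

# Reveal a tumour-like uptake band centred at intensity 200 of an
# 8-bit range; voxels far from the spike become fully transparent.
lut = [alpha_spike(i, center=200, width=30) for i in range(256)]
```

Precomputing the curve into a lookup table (as `lut` does here) is the usual way such a function would be fed to a pixel shader as a 1D texture.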
An Analysis of Scalable GPU-Based Ray-Guided Volume Rendering
Fogal, Thomas; Schiewe, Alexander; Krüger, Jens
2014-01-01
Volume rendering continues to be a critical method for analyzing large-scale scalar fields, in disciplines as diverse as biomedical engineering and computational fluid dynamics. Commodity desktop hardware has struggled to keep pace with data size increases, challenging modern visualization software to deliver responsive interactions for O(N³) algorithms such as volume rendering. We target the data type common in these domains: regularly-structured data. In this work, we demonstrate that the major limitation of most volume rendering approaches is their inability to switch the data sampling rate (and thus data size) quickly. Using a volume renderer inspired by recent work, we demonstrate that the actual amount of visualizable data for a scene is typically bounded considerably below the memory available on a commodity GPU. Our instrumented renderer is used to investigate design decisions typically swept under the rug in the volume rendering literature. The renderer is freely available, with binaries for all major platforms as well as full source code, to encourage reproduction and comparison with future research. PMID:25506079
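The core observation above, that the visualizable working set is bounded well below GPU memory provided the renderer can switch sampling rate quickly, can be illustrated with a toy resource calculation (this is the resource argument only, not the paper's algorithm):

```python
def pick_level(visible_bricks, bytes_per_brick, gpu_budget_bytes):
    """Halving the sampling rate per axis shrinks the data 8x; coarsen
    until the visible working set fits in GPU memory. Level 0 is full
    resolution. (Illustrative; names are ours.)"""
    level, need = 0, visible_bricks * bytes_per_brick
    while need > gpu_budget_bytes:
        need /= 8.0
        level += 1
    return level

# 1000 visible bricks of 1 MiB each against a 256 MiB texture budget:
print(pick_level(1000, 2**20, 256 * 2**20))  # → 1
```

One coarsening step already brings a 1000 MiB working set down to 125 MiB, which is why fast level switching matters more than raw GPU memory size.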
Remote volume rendering pipeline for mHealth applications
NASA Astrophysics Data System (ADS)
Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald
2014-03-01
We introduce a novel remote volume rendering pipeline for medical visualization targeted at mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners, combined with the complexity of volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512³ voxels and can exceed 6 gigabytes in size by spanning the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved, the rendering performance of the device remains a bottleneck. To deal with this issue, we propose a thin-client architecture in which the data reside entirely on a remote server, where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.
A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.
2012-05-01
We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable, rendering a 204 GB data cube at an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework
Kroes, Thomas; Post, Frits H.; Botha, Charl P.
2012-01-01
The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to DVR. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license. PMID:22768292
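The stochastic sampling that makes MCRT flexible for volumetric data can be illustrated with Woodcock (delta) tracking, a standard Monte Carlo technique for sampling free paths in heterogeneous media; this is a generic sketch of the idea, not code from Exposure Render:

```python
import math
import random

def transmittance_mc(sigma, sigma_max, length, n=20000, seed=1):
    """Monte Carlo transmittance through a medium with extinction
    function sigma(x) bounded by the majorant sigma_max: sample free
    paths against sigma_max, accept real collisions with probability
    sigma(x)/sigma_max (Woodcock / delta tracking)."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / sigma_max
            if x >= length:
                break                       # photon escaped the segment
            if rng.random() < sigma(x) / sigma_max:
                absorbed += 1               # real (non-null) collision
                break
    return 1.0 - absorbed / n

# Homogeneous sanity check: sigma = 1 over unit length gives T = e^-1.
print(transmittance_mc(lambda x: 1.0, 2.0, 1.0))
```

Because each photon walk is independent, effects such as depth of field or soft shadows amount to additional random sampling per walk, which is the flexibility the abstract refers to.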
Real-time volume rendering of 4D image using 3D texture mapping
NASA Astrophysics Data System (ADS)
Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il
2001-05-01
A four-dimensional image is 3D volume data that varies with time. It is used to express deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by using coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick to be loaded is tested for similarity to the one already in memory. If a brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
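The brick-level similarity test can be sketched as deciding which bricks of the incoming volume differ from the cached frame and therefore need re-uploading as textures (this is our reading of the method; the function name and tolerance parameter are illustrative):

```python
def changed_bricks(prev, curr, brick_size, tol=0):
    """Return indices of bricks whose voxels differ from the cached
    frame by more than `tol`; only these need re-uploading as 3D
    textures, exploiting frame-to-frame coherence. 1-D for brevity."""
    dirty = []
    for b in range(0, len(curr), brick_size):
        a, c = prev[b:b + brick_size], curr[b:b + brick_size]
        if any(abs(x - y) > tol for x, y in zip(a, c)):
            dirty.append(b // brick_size)
    return dirty

frame0 = [0] * 16
frame1 = [0] * 16
frame1[5] = 9          # one voxel changed, in brick 1 (bricks of 4 voxels)
print(changed_bricks(frame0, frame1, brick_size=4))  # → [1]
```

For slowly deforming data most bricks pass unchanged, so texture upload cost scales with the amount of motion rather than the full volume size.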
PRISM: An open source framework for the interactive design of GPU volume rendering shaders.
Drouin, Simon; Collins, D Louis
2018-01-01
Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, and provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering-effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK, at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging experts who had little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate the development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel. PMID:29534069
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure for a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
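The consistency argument can be seen in miniature: applying a transfer function to a coarse voxel's intensity pdf (an expectation) gives a more faithful result than applying it to the down-sampled mean intensity. A toy sketch with a discrete pdf (the paper uses mixtures of 4D Gaussians; this simplification is ours):

```python
def tf_expectation(pdf, tf):
    """Apply a transfer function to a voxel's intensity pdf by taking
    E[tf(I)], rather than tf(E[I]) as plain down-sampling would."""
    return sum(p * tf(i) for i, p in pdf.items())

tf = lambda i: 1.0 if i >= 128 else 0.0   # hypothetical step transfer function
# Coarse voxel covering a neighbourhood that is half dark, half bright:
pdf = {0: 0.5, 255: 0.5}
print(tf_expectation(pdf, tf))                     # → 0.5 (consistent)
print(tf(sum(i * p for i, p in pdf.items())))      # tf(127.5) → 0.0
```

The second line shows why mean-based down-sampling makes the bright material vanish at coarse levels, while the pdf-based result matches the average of evaluating the transfer function at full resolution.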
NASA Technical Reports Server (NTRS)
Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.
2002-01-01
Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to the lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The objective of this study was to develop and implement a fast, interactive volume renderer for RT3DE datasets, designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 with regurgitation, 8 normal, 8 with cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D datasets were rendered in real time throughout the cardiac cycle, and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user-friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed from patient-specific 4D CT image data sequences. Virtual patient models are visualized in real time by ray-casting-based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements, computed at high rendering rates, to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and to the haptic simulation of inserting a virtual bendable needle. To this end, different motion models that are applicable in real time are presented, and the methods are integrated into a needle-puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering, and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
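The haptic-side idea, translating the device position into the static reference CT frame by removing the current model-predicted displacement, can be sketched in one dimension (the sinusoidal motion model and parameter names are our illustrative assumptions, not the paper's patient-specific models):

```python
import math

def displacement(t, amp=10.0, period=4.0):
    """Hypothetical 1-D respiratory motion model: sinusoidal
    cranio-caudal displacement in mm with breathing period `period` s."""
    return amp * math.sin(2.0 * math.pi * t / period)

def to_reference_space(device_z, t):
    """Translate the haptic device position into the static reference CT
    frame by removing the current breathing displacement, so force
    feedback can be computed against the unwarped reference image."""
    return device_z - displacement(t)

# At peak inhale (t = 1 s, displacement 10 mm) a device position of
# 10 mm maps back to 0 mm in the reference CT.
print(to_reference_space(10.0, 1.0))
```

Because only a displacement lookup and a subtraction happen per haptic tick, this transformation is cheap enough to run at kHz-order update rates, consistent with the ~2,000 Hz figure above.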
Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles
2017-05-26
Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.
Qiao, Liang; Chen, Xin; Zhang, Ye; Zhang, Jingna; Wu, Yi; Li, Ying; Mo, Xuemei; Chen, Wei; Xie, Bing; Qiu, Mingguo
2017-01-01
This study aimed to propose a pure web-based solution that enables users to access large-scale 3D medical volumes anywhere, with a good user experience and complete details. A novel Master-Slave interaction mode was proposed, which combines the advantages of remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism to listen for interactive requests from clients (the Slave model) and to guide Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors on the Slave model and to improve the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high-quality image remained at approximately 1 s, while the peak bandwidth was <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution serves users by providing rapid access to the application via one URL hyperlink, without special software or hardware requirements, in diversified network environments, and can be seamlessly integrated into other telemedical systems. PMID:28638406
GPU-based multi-volume ray casting within VTK for medical applications.
Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-03-01
Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective of the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data, but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal, short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated into the Visualization Toolkit (VTK) pipeline, with the ability to apply different operations (e.g., transformations, clipping, and cropping) to each volume separately. The implementation is cross-platform (Windows, Linux and Mac OS X) and runs on different graphics cards (NVIDIA and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³ voxels. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (>15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets.
The MVRC was successfully integrated into an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization: the proposed ray caster provided high-quality images at reasonable frame rates and was effective when used in a neurosurgical navigation application.
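The front-to-back α-blending used along each ray is the standard compositing recurrence; a one-ray sketch with early ray termination (the termination threshold is our addition, not a parameter reported by the paper):

```python
def composite_front_to_back(samples):
    """Front-to-back alpha blending of samples along one ray, nearest
    first. Each sample is (color, alpha); in a multi-volume caster the
    samples from all volumes are simply interleaved in depth order."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:        # early ray termination: ray is opaque
            break
    return color, alpha

# Semi-transparent white, semi-transparent black, then opaque white:
print(composite_front_to_back([(1.0, 0.5), (0.0, 0.5), (1.0, 1.0)]))
# → (0.75, 1.0)
```

Interleaving per-volume samples by depth before applying this recurrence is what lets the MVRC handle mutual occlusion between volumes correctly in a single pass.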
Ambient occlusion effects for combined volumes and tubular geometry.
Schott, Mathias; Martin, Tobias; Grosset, A V Pascal; Smith, Sean T; Hansen, Charles D
2013-06-01
This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed. PMID:23559506
Ray Casting of Large Multi-Resolution Volume Datasets
NASA Astrophysics Data System (ADS)
Lux, C.; Fröhlich, B.
2009-04-01
High-quality volume visualization through ray casting on graphics processing units (GPUs) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size, which act as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented by inner nodes of the hierarchy, which are generated by downsampling eight neighboring nodes on a finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be maintained locally for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime, the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory thereby acts as a second-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory. This texture atlas contains the complete working set of bricks of the current multi-resolution representation of the volume, which enables the ray casting algorithm to access the whole working set through a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations.
We encode this information into a small 3D index texture which represents the current octree subdivision at its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set in only a single rendering pass with no loss of compositing precision. In contrast, most state-of-the-art volume rendering systems handle the bricked data as individual 3D textures, which are rendered one at a time while the results are composited into a lower-precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques like empty-space skipping, adaptive sampling and pre-integrated transfer functions in a very straightforward manner with virtually no extra cost. Our interactive volume ray casting implementation allows high quality visualizations of massive volume data sets of tens of gigabytes in size on standard desktop workstations.
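The octree construction described above can be sketched in a few lines. The snippet below is an illustrative sketch only, assuming a power-of-two volume and a simple box filter for the 2x2x2 downsampling of finer nodes; the paper's brick layout, texture atlas and index texture are omitted:

```python
import numpy as np

def build_brick_hierarchy(volume, brick=4):
    """Build coarser octree levels from a volume by averaging each
    2x2x2 neighborhood of the finer level (a simple box filter)."""
    levels = [volume.astype(np.float64)]
    while min(levels[-1].shape) > brick:
        v = levels[-1]
        # fold each axis into (half-size, 2) and average the 2x2x2 groups
        v = v.reshape(v.shape[0]//2, 2, v.shape[1]//2, 2, v.shape[2]//2, 2)
        levels.append(v.mean(axis=(1, 3, 5)))
    return levels

# a 16^3 synthetic volume: the finest level plus two coarser octree levels
vol = np.arange(16**3, dtype=np.float64).reshape(16, 16, 16)
levels = build_brick_hierarchy(vol)
print([lv.shape for lv in levels])   # finest to coarsest
```

Because the box filter averages equal-sized groups, the global mean of the volume is preserved at every level of the hierarchy.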
Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.
Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael
2016-07-01
'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering process, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume-rendered medical images have been a primary beneficiary of the VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of the VH to medical images that have large intensity ranges and volume dimensions and hence require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs), which enables efficient computation of the histogram.
We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH and with only minor numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation.
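The adaptive-binning idea can be illustrated with a small numpy-only sketch. The helper names below are hypothetical, and the tiny 1-D k-means stands in for the paper's cluster analysis algorithm; the visibility values are random stand-ins for values produced by a renderer:

```python
import numpy as np

def adaptive_bins(intensities, k, iters=20, seed=0):
    """Tiny 1-D k-means over voxel intensities so that bin centers
    adapt to the data distribution (sketch of the AB-VH idea)."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(intensities, size=k, replace=False))
    for _ in range(iters):
        labels = np.abs(intensities[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = intensities[labels == j].mean()
        centers.sort()
    # final assignment against the converged, sorted centers
    labels = np.abs(intensities[:, None] - centers[None, :]).argmin(axis=1)
    return centers, labels

def binned_visibility_histogram(visibility, labels, k):
    """Accumulate per-voxel visibility into the k adaptive bins."""
    hist = np.zeros(k)
    np.add.at(hist, labels, visibility)   # unbuffered scatter-add
    return hist

rng = np.random.default_rng(1)
# bimodal intensities mimic, e.g., soft tissue vs. bone in CT
inten = np.concatenate([rng.normal(100, 5, 5000), rng.normal(1000, 20, 5000)])
vis = rng.random(inten.size)              # stand-in for rendered visibility
centers, labels = adaptive_bins(inten, k=8)
hist = binned_visibility_histogram(vis, labels, 8)
print(hist.sum() - vis.sum())             # ~0: no visibility mass is lost
```

Eight adaptive bins here summarize an intensity range that equal binning would need hundreds of bins to resolve without merging the two tissue modes.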
Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis
NASA Astrophysics Data System (ADS)
Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.
2015-08-01
The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprints, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. Once all these components are included, EarthScape will be a multi-purpose platform that simultaneously provides data analysis, hybrid visualization and complex interaction. The software is available on demand for free at france@exelisvis.com.
Efficient visibility encoding for dynamic illumination in direct volume rendering.
Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas
2012-03-01
We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
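The core encoding step — projecting a spherical visibility function onto a low-order SH basis — can be sketched as follows. This is an illustrative sketch using only bands l=0 and l=1 and a simple latitude-longitude quadrature; the paper's multiresolution grid, LOD selection and piecewise global integration are omitted:

```python
import numpy as np

def sh_basis(dirs):
    """Real spherical-harmonic basis, bands l=0 and l=1 (4 coefficients)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.28209479177387814          # Y_0^0 = 1 / (2*sqrt(pi))
    c1 = 0.4886025119029199           # |Y_1^m| = sqrt(3 / (4*pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def project_visibility(vis_fn, n_theta=64, n_phi=128):
    """Project a spherical visibility function onto SH by midpoint quadrature."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing='ij')
    dirs = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)],
                    axis=-1).reshape(-1, 3)
    # per-sample solid angle: sin(theta) dtheta dphi
    dw = (np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)).reshape(-1)
    f = vis_fn(dirs)
    return (sh_basis(dirs) * (f * dw)[:, None]).sum(axis=0)

# fully unoccluded visibility (=1 everywhere): only the DC coefficient survives
coeffs = project_visibility(lambda d: np.ones(len(d)))
print(coeffs)
```

For the constant function the analytic projection is c_00 = 2*sqrt(pi) with all l=1 coefficients zero, which the quadrature reproduces closely.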
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take up large amounts of storage space, and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning the processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in a 40-50% saving in overall rendering time.
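The trade-off between resource utilization and pipeline startup latency can be explored with a toy cost model. Everything below is illustrative rather than the paper's model: `io_time` and `render_time_1proc` are made-up parameters, and render time is assumed to scale linearly with the processors per group:

```python
def pipeline_time(n_volumes, n_procs, n_groups, io_time, render_time_1proc):
    """Toy cost model for pipelined time-varying rendering: processors are
    split into n_groups groups, each group loads and renders every
    n_groups-th volume, and I/O overlaps across groups."""
    per_group = n_procs // n_groups
    stage = io_time + render_time_1proc / per_group   # time per volume in a group
    volumes_per_group = -(-n_volumes // n_groups)     # ceiling division
    startup = (n_groups - 1) * io_time                # pipeline fill latency
    return startup + volumes_per_group * stage

# sweep group counts for 64 processors and 60 time steps
for g in (1, 2, 4, 8, 16):
    print(g, pipeline_time(60, 64, g, io_time=2.0, render_time_1proc=64.0))
```

With these made-up numbers the total time first falls as groups are added (I/O hides behind rendering and per-group overhead shrinks) and then rises again as startup latency dominates — the interior optimum the abstract describes.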
Transform coding for hardware-accelerated volume rendering.
Fout, Nathaniel; Ma, Kwan-Liu
2007-01-01
Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
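The key trick — consolidating the inverse transform with dequantization — can be demonstrated on a 1-D block with an orthonormal DCT. This is a simplified sketch: the paper's scheme operates on volume blocks and adds block classification, while here only the matrix consolidation is shown:

```python
import numpy as np

N = 8
# orthonormal DCT-II matrix (rows are basis vectors)
k = np.arange(N)
A = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N)) * np.sqrt(2 / N)
A[0] *= 1 / np.sqrt(2)

steps = np.linspace(4, 32, N)        # coarser steps for higher frequencies

def encode(block):
    """Forward transform followed by uniform quantization."""
    return np.round((A @ block) / steps).astype(np.int32)

# fold the per-coefficient dequantization scales into the inverse
# transform: decoding becomes a single matrix-vector product per block
B = A.T * steps[None, :]

def decode(q):
    return B @ q

block = np.sin(np.linspace(0, np.pi, N)) * 100
q = encode(block)
naive = A.T @ (q * steps)            # dequantize, then inverse transform
print(np.allclose(decode(q), naive)) # same result, fewer ops per block
```

Since B is precomputed once offline, the decoder never performs an explicit dequantization pass — the asymmetry between encoder effort and decoder speed the abstract emphasizes.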
Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr
2005-09-01
We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.
NASA Astrophysics Data System (ADS)
Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.
2017-10-01
The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding, for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial-domain compression, and difference encoding for temporal-domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown reductions as high as 90% in both storage space and inter-frame delay in many cases.
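The temporal half of the scheme can be sketched with a flat difference encoder. This is a deliberate simplification: the paper merges identical octree subtrees across time steps, whereas this sketch just quantizes each frame and stores only the voxels that changed:

```python
import numpy as np

def quantize(volume, levels=256):
    """Uniform voxel-level quantization to 8 bits."""
    lo, hi = volume.min(), volume.max()
    return np.round((volume - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)

def difference_encode(frames):
    """Store the first quantized frame, then only the voxels that changed."""
    encoded = [frames[0].copy()]
    for prev, cur in zip(frames, frames[1:]):
        changed = np.flatnonzero(cur != prev)
        encoded.append((changed, cur.ravel()[changed]))
    return encoded

def difference_decode(encoded):
    frames, cur = [encoded[0]], encoded[0]
    for idx, vals in encoded[1:]:
        cur = cur.copy()
        cur.ravel()[idx] = vals
        frames.append(cur)
    return frames

rng = np.random.default_rng(0)
f0 = quantize(rng.random((16, 16, 16)))
f1 = f0.copy()
f1[:2] = 0                               # only a small region changes
enc = difference_encode([f0, f1])
dec = difference_decode(enc)
print(np.array_equal(dec[1], f1), enc[1][0].size)  # lossless after quantization
```

Frames with strong temporal coherence store only a small index/value list instead of a full volume, which is where both the storage and inter-frame I/O savings come from.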
Foundations for Measuring Volume Rendering Quality
NASA Technical Reports Server (NTRS)
Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
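A minimal example of such difference metrics might look like the following; the three metrics chosen here are common illustrative choices, not the paper's full suite:

```python
import numpy as np

def difference_metrics(img_a, img_b, threshold=2/255):
    """Compare two volume-rendered images with a few simple metrics:
    RMS error, maximum absolute error, and the fraction of pixels
    whose difference exceeds a (hypothetical) perceptual threshold."""
    d = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    return {
        "rms": float(np.sqrt((d ** 2).mean())),
        "max": float(d.max()),
        "frac_above": float((d > threshold).mean()),
    }

a = np.zeros((64, 64))
b = a.copy()
b[0, 0] = 1.0                         # one badly wrong pixel
m = difference_metrics(a, b)
print(m)
```

The example shows why a suite of metrics matters: a single bright outlier pixel barely moves the RMS error yet produces a maximum error of 1.0, so no one metric alone classifies the difference adequately.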
Seemann, M D; Claussen, C D
2001-06-01
A hybrid rendering method is described which combines a color-coded surface rendering method and a volume rendering method, enabling virtual endoscopic examinations using different representation models. 14 patients with malignancies of the lung and mediastinum (n=11) or lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold-interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both a simultaneous visualization of an airway, an airway lesion and mediastinal structures and a quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated or refused.
Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and guiding bronchoscopic biopsy, surgical intervention and palliative therapy, and is likely to be increasingly accepted as a screening method for people with suspected endobronchial malignancy and as a control examination in the aftercare of patients with malignant diseases.
The physics of volume rendering
NASA Astrophysics Data System (ADS)
Peters, Thomas
2014-11-01
Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
Three Dimensional Projection Environment for Molecular Design and Surgical Simulation
2011-08-01
bypasses the cumbersome meshing process. The deformation model is comprised only of mass nodes, which are generated by sampling the object volume before... force should minimize the penetration volume, the haptic feedback force is derived directly. Additionally, a post-processing technique is developed to... render distinct physical tissue properties across different interaction areas. The proposed approach does not require any pre-processing and is...
Elasticity-based three dimensional ultrasound real-time volume rendering
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.
2009-02-01
Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Its use, however, has been hindered by the lack of real-time visualization methods that are capable of producing high quality 3D renderings of the target/surface of interest. Volume rendering is a well-known visualization method which can display clear surfaces from the acquired volumetric data, and has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target/surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetric and angiographic applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on the GPU, which gives an update rate of 40 volumes/sec.
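The opacity computation can be sketched as a blend of a strain window and a normalized gradient term. All window parameters and the blend weight below are illustrative, not values from the paper:

```python
import numpy as np

def opacity(strain, grad_mag, alpha_max=0.9, strain_lo=0.2, strain_hi=0.6, w=0.5):
    """Blend a strain-based opacity window with a conventional
    gradient-magnitude opacity term (all parameters hypothetical)."""
    # stiff tissue (low strain) inside the window becomes opaque
    s = np.clip((strain_hi - strain) / (strain_hi - strain_lo), 0.0, 1.0)
    # normalized gradient-magnitude opacity, the conventional ingredient
    g = grad_mag / (grad_mag.max() + 1e-12)
    return alpha_max * (w * s + (1 - w) * g)

strain = np.array([0.1, 0.4, 0.8])   # stiff lesion -> soft background
grad = np.array([0.0, 1.0, 0.5])
print(opacity(strain, grad))
```

The stiff voxel stays visible even where its gradient is weak, which is exactly what the elasticity term buys over a purely gradient-based transfer function in speckle-heavy ultrasound data.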
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in CUDA and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation was performed and the deformation algorithms were analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
High-quality and interactive animations of 3D time-varying vector fields.
Helgeland, Anders; Elboth, Thomas
2006-01-01
In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, so that it does not suffer from the perceptual issues associated with dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, the particles are used as seed points to generate field lines from any vector field, such as the velocity field or the vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
A spatially augmented reality sketching interface for architectural daylighting design.
Sheng, Yu; Yapo, Theodore C; Young, Christopher; Cutler, Barbara
2011-01-01
We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflectance between diffuse patches and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected on the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation.
Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.
Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K
1995-07-01
The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
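The difference between MIP and volume-rendering compositing is easy to see on a single ray. The sketch below uses scalar samples and per-sample opacities (a minimal illustration; real renderers composite RGBA values):

```python
import numpy as np

def mip(samples):
    """Maximum-intensity projection: keep only the brightest sample."""
    return samples.max()

def composite(samples, opacities):
    """Front-to-back compositing: every sample contributes, weighted by
    its opacity and the transparency accumulated in front of it."""
    color, transparency = 0.0, 1.0
    for s, a in zip(samples, opacities):
        color += transparency * a * s
        transparency *= (1.0 - a)
    return color

samples = np.array([0.2, 0.9, 0.4, 0.9])   # two overlapping bright structures
opac = np.array([0.1, 0.5, 0.3, 0.5])
print(mip(samples))                  # the second bright structure vanishes
print(composite(samples, opac))      # both structures contribute
```

MIP returns 0.9 regardless of how many bright structures lie on the ray or in what order, which is exactly the depth ambiguity and loss of overlapping structures the abstract describes; compositing sums the weighted contributions of all voxels.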
Four-dimensional ultrasonography of the fetal heart with spatiotemporal image correlation.
Gonçalves, Luís F; Lee, Wesley; Chaiworapongsa, Tinnakorn; Espinoza, Jimmy; Schoen, Mary Lou; Falkensammer, Peter; Treadwell, Marjorie; Romero, Roberto
2003-12-01
This study was undertaken to describe a new technique for the examination of the fetal heart using four-dimensional ultrasonography with spatiotemporal image correlation (STIC). Volume data sets of the fetal heart were acquired with a new cardiac gating technique (STIC), which uses automated transverse and longitudinal sweeps of the anterior chest wall. These volumes were obtained from 69 fetuses: 35 normal, 16 with congenital anomalies not affecting the cardiovascular system, and 18 with cardiac abnormalities. Dynamic multiplanar slicing and surface rendering of cardiac structures were performed. To illustrate the STIC technique, two representative volumes from a normal fetus were compared with volumes obtained from fetuses with the following congenital heart anomalies: atrioventricular septal defect, tricuspid stenosis, tricuspid atresia, and interrupted inferior vena cava with abnormal venous drainage. Volume datasets obtained with a transverse sweep were utilized to demonstrate the cardiac chambers, moderator band, interatrial and interventricular septa, atrioventricular valves, pulmonary veins, and outflow tracts. With the use of a reference dot to navigate the four-chamber view, intracardiac structures could be simultaneously studied in three orthogonal planes. The same volume dataset was used for surface rendering of the atrioventricular valves. The aortic and ductal arches were best visualized when the original plane of acquisition was sagittal. Volumes could be interactively manipulated to simultaneously visualize both outflow tracts, in addition to the aortic and ductal arches. Novel views of specific structures were generated. For example, the location and extent of a ventricular septal defect was imaged in a sagittal view of the interventricular septum. Furthermore, surface-rendered images of the atrioventricular valves were employed to distinguish between normal and pathologic conditions.
Representative video clips were posted on the Journal's Web site to demonstrate the diagnostic capabilities of this new technique. Dynamic multiplanar slicing and surface rendering of the fetal heart are feasible with STIC technology. One good quality volume dataset, obtained from a transverse sweep, can be used to examine the four-chamber view and the outflow tracts. This novel method may assist in the evaluation of fetal cardiac anatomy.
NASA Astrophysics Data System (ADS)
Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro
2003-05-01
This paper describes a software-based fast volume rendering (VolR) method on a PC platform that uses multimedia instructions, such as SIMD instructions, which are currently available in PC CPUs. This method achieves fast rendering speed through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot products of normal vectors and light-source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes in volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxels by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware, at thirteen frames per second. Semi-translucent display is also possible.
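Process (a), interpolation at a sample point, is the inner loop of such a ray caster. A scalar reference version of trilinear interpolation (without any SIMD optimization — only the logic that the multimedia instructions would vectorize) looks like:

```python
import numpy as np

def trilinear(volume, p):
    """Trilinearly interpolate a scalar volume at fractional position p."""
    x0, y0, z0 = (int(np.floor(c)) for c in p)
    fx, fy, fz = p[0] - x0, p[1] - y0, p[2] - z0
    c = volume[x0:x0+2, y0:y0+2, z0:z0+2].astype(np.float64)
    c = c[0] * (1 - fx) + c[1] * fx      # collapse the x axis
    c = c[0] * (1 - fy) + c[1] * fy      # collapse the y axis
    return c[0] * (1 - fz) + c[1] * fz   # collapse the z axis

vol = np.arange(8, dtype=np.float64).reshape(2, 2, 2)
print(trilinear(vol, (0.5, 0.5, 0.5)))   # average of all 8 corner voxels
```

Each collapse halves the number of values, so one sample costs seven linear interpolations — the seven fused multiply-adds a SIMD implementation performs per ray sample.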
Rapid Decimation for Direct Volume Rendering
NASA Technical Reports Server (NTRS)
Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane
1997-01-01
An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.
Spatio-temporal visualization of air-sea CO2 flux and carbon budget using volume rendering
NASA Astrophysics Data System (ADS)
Du, Zhenhong; Fang, Lei; Bai, Yan; Zhang, Feng; Liu, Renyi
2015-04-01
This paper presents a novel visualization method to show the spatio-temporal dynamics of carbon sinks and sources, and carbon fluxes in the ocean carbon cycle. The air-sea carbon budget and its process of accumulation are demonstrated in the spatial dimension, while the distribution pattern and variation of CO2 flux are expressed by color changes. In this way, we unite spatial and temporal characteristics of satellite data through visualization. A GPU-based direct volume rendering technique using half-angle slicing is adopted to dynamically visualize the released or absorbed CO2 gas with shadow effects. A data model is designed to generate four-dimensional (4D) data from satellite-derived air-sea CO2 flux products, and an out-of-core scheduling strategy is also proposed for on-the-fly rendering of time series of satellite data. The presented 4D visualization method is implemented on graphics cards with vertex, geometry and fragment shaders. It provides a visually realistic simulation and user interaction for real-time rendering. This approach has been integrated into the Information System of Ocean Satellite Monitoring for Air-sea CO2 Flux (IssCO2) for the research and assessment of air-sea CO2 flux in the China Seas.
Direct Volume Rendering with Shading via Three-Dimensional Textures
NASA Technical Reports Server (NTRS)
VanGelder, Allen; Kim, Kwansik
1996-01-01
A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.
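The slicing-and-compositing pass described above can be sketched in a few lines. The following is a simplified, CPU-side stand-in (axis-aligned slices, no directional lighting or normal quantization); the function names are illustrative and not taken from Voltx.

```python
import numpy as np

def render_slabs(volume, transfer, n_slices):
    """Axis-aligned stand-in for 3D-texture slicing: sample the volume
    on n_slices parallel planes, map scalars to premultiplied RGBA via
    the transfer function, and composite back to front with the 'over'
    operator (lighting omitted for brevity)."""
    depth = volume.shape[2]
    image = np.zeros(volume.shape[:2] + (4,))
    for k in np.linspace(depth - 1, 0, n_slices).astype(int):  # back to front
        rgba = transfer(volume[:, :, k])      # one (H, W, 4) slab
        alpha = rgba[..., 3:4]
        image = rgba + (1.0 - alpha) * image  # 'over' blend, as the GPU does
    return image

# A uniform half-opacity "red" volume rendered with two slices.
vol = np.full((2, 2, 4), 0.5)
red = lambda s: np.stack([s, 0 * s, 0 * s, s], axis=-1)  # premultiplied RGBA
img = render_slabs(vol, red, n_slices=2)
```

On actual hardware the blend is performed by the texture and blending units; this sketch only mirrors the arithmetic of the compositing operator.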
Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics
NASA Technical Reports Server (NTRS)
Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)
2000-01-01
This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by reducing the amount of texture data required. This coherence also allows improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.
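The coherence-driven traversal can be illustrated with a toy tree. The node fields and the error-versus-tolerance test below are illustrative stand-ins for the TSP tree and its color-based error metrics, not the paper's data structures.

```python
class TSPNode:
    """Toy stand-in for a Time-Space Partitioning tree node; 'error'
    mimics a (color-based) coherence error, children is empty for
    leaves. Field names are illustrative, not from the paper."""
    def __init__(self, name, error, mean_opacity, children=()):
        self.name, self.error = name, error
        self.mean_opacity = mean_opacity
        self.children = list(children)

def collect_renderable(node, tolerance, out):
    """Gather the coarsest nodes whose error is within tolerance.

    Transparent regions are skipped entirely; sufficiently coherent
    regions are rendered as a single node (e.g. flat-shaded) instead
    of descending to the leaves."""
    if node.mean_opacity == 0.0:
        return out                    # skip fully transparent region
    if node.error <= tolerance or not node.children:
        out.append(node.name)         # coherent enough: render whole node
        return out
    for child in node.children:
        collect_renderable(child, tolerance, out)
    return out

leaf_a = TSPNode("a", 0.0, 0.8)
leaf_b = TSPNode("b", 0.0, 0.0)       # empty space
root = TSPNode("root", 0.3, 0.4, [leaf_a, leaf_b])
nodes = collect_renderable(root, tolerance=0.1, out=[])
```

Loosening the tolerance makes the traversal stop higher in the tree, trading image accuracy for fewer rendered primitives.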
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. The system allows the user to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which make it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
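The three-level lookup behind such a distributed shared memory (local cache, then remote node memory, then local disk) can be sketched as follows. The names and the dictionary-based "transport" are illustrative; the real system exchanges pages over Gigabit Ethernet.

```python
def fetch_brick(brick_id, local_cache, peer_caches, disk):
    """Three-level lookup sketch for a distributed brick cache.

    On a local miss, page residency is checked on the other cluster
    nodes before falling back to (slow) local disk, mirroring the
    hierarchical cache idea. Fetched bricks are cached locally."""
    if brick_id in local_cache:              # level 1: local memory cache
        return local_cache[brick_id], "local"
    for peer in peer_caches:                 # level 2: another node's memory
        if brick_id in peer:
            local_cache[brick_id] = peer[brick_id]
            return peer[brick_id], "network"
    data = disk[brick_id]                    # level 3: local disk (slowest)
    local_cache[brick_id] = data
    return data, "disk"

local, peers, disk = {}, [{"brick-7": b"remote"}], {"brick-9": b"disk"}
data, source = fetch_brick("brick-7", local, peers, disk)
```

A second request for the same brick is then served from the local cache, which is the whole point of the hierarchy.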
Topology-aware illumination design for volume rendering.
Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan
2016-08-19
Direct volume rendering is one of the most flexible and effective approaches for inspecting large volumetric data such as medical and biological images. In conventional volume rendering, it is often time-consuming to set up a meaningful illumination environment. Moreover, conventional illumination approaches usually assign the same illumination-model parameter values to different structures manually, and thus neglect the important illumination variations due to structure differences. We introduce a novel topology-based illumination design paradigm for volume rendering that automates illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of the input volumetric data. The automation of illumination design is achieved based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize differences in the perceived illuminance of structures, a two-phase topology-aware illuminance perception contrast model is proposed, based on the psychological concept of Just-Noticeable-Difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results show that the approach is more effective in depth and shape depiction, and provides higher perceptual differences between structures.
Distributed volume rendering and stereoscopic display for radiotherapy treatment planning
NASA Astrophysics Data System (ADS)
Hancock, David J.
The thesis describes attempts to use direct volume rendering techniques to produce visualisations useful in the preparation of radiotherapy treatment plans. The selected algorithms allow the generation of data-rich images which can be used to assist the radiologist in comprehending complicated three-dimensional phenomena. The treatment plans are formulated using a three-dimensional model which combines patient data acquired from CT scanning and the results of a simulation of the radiation delivery. Multiple intersecting beams with shaped profiles are used, and the region of intersection is designed to closely match the position and shape of the targeted tumour region. The proposed treatment must be evaluated as to how well the target region is enveloped by the high dose occurring where the beams intersect, and also as to whether the treatment is likely to expose non-tumour regions to unacceptably high levels of radiation. Conventionally the plans are reviewed by examining CT images overlaid with contours indicating dose levels. Volume visualisation offers a possible saving in time by presenting the data in three-dimensional form, thereby removing the need to examine a set of slices. The most difficult aspect is to depict unambiguously the relationships between the different data. For example, if a particular beam configuration results in unintended irradiation of a sensitive organ, then it is essential to ensure that this is clearly displayed, and that the 3D relationships between the beams and other data can be readily perceived in order to decide how to correct the problem. The user interface has been designed to present a unified view of the different techniques available for identifying features of interest within the data. The system differs from those previously reported in that complex visualisations can be constructed incrementally, and several different combinations of features can be viewed simultaneously.
To maximise the quantity of relevant data presented in a single view, large regions of the data are rendered very transparently. This is done to ensure that interesting features buried deep within the data are visible from any viewpoint. Rendering images with high degrees of transparency raises a number of problems, primarily the drop in quality of depth cues in the image, but also the increase in computational requirements over surface-based visualisations. One solution to the increase in image generation times is the use of parallel architectures, which are an attractive platform for large visualisation tasks such as this. A parallel implementation of the direct volume rendering algorithm is described and its performance is evaluated. Several issues must be addressed in implementing an interactive rendering system in a distributed computing environment: principally overcoming the latency and limited bandwidth of the typical network connection. This thesis reports a pipelining strategy developed to improve the level of interactivity in such situations. Stereoscopic image presentation offers a method to offset the reduction in clarity of the depth information in the transparent images. The results of an investigation into the effectiveness of stereoscopic display as an aid to perception in highly transparent images are presented. Subjects were shown scenes of a synthetic test data set in which conventional depth cues were very limited. The experiments were designed to discover what effect stereoscopic viewing of the transparent, volume rendered images had on users' depth perception.
Application of volume rendering technique (VRT) for musculoskeletal imaging.
Darecki, Rafał
2002-10-30
A review of the applications of the volume rendering technique (VRT) in three-dimensional musculoskeletal imaging from CT data. General features, potential, and indications for applying the method are presented.
Hybrid rendering of the chest and virtual bronchoscopy [corrected].
Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D
2000-10-30
Thin-section spiral computed tomography was used to acquire volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, thus producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examinations with a transparent color-coded shaded-surface model enable the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall, and therefore offer a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.
Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan
2013-09-01
Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume still involves many challenges, and no recognized, accurate measurement method currently exists. The aim was to explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume, and then to analyze the correlation between the volume of the free pleural effusion and the different diameters of the effusion. The 64-slice CT volume-rendering technique was used for measurement and analysis in three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameters of the effusion, which was then used to calculate the regression equations. When the fluid volume of the self-made thoracic model measured by the 64-slice CT volume-rendering technique was compared with the actual injection volume, no significant difference was found (P = 0.836). For the 25 patients with drained pleural effusions, comparison of the reduction in volume with the actual volume of the liquid extract revealed no significant difference (P = 0.989). The following linear regression equation relates the pleural effusion volume (V), measured by the CT volume-rendering technique, to the greatest depth of the effusion (d): V = 158.16 × d - 116.01 (r = 0.91, P < 0.001). The following linear regression relates the volume to the product of the three diameters of the effusion (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P < 0.001).
The 64-slice CT volume-rendering technique can accurately measure the volume in pleural effusion patients, and a linear regression equation can be used to estimate the volume of the free pleural effusion.
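As a worked example, the two fitted regression equations can be applied directly. The units are an assumption here (centimeters for the diameters, milliliters for the volume), since the abstract does not state them.

```python
def effusion_volume_from_depth(d_cm):
    """Estimate free pleural effusion volume from its greatest depth,
    using the paper's fitted regression V = 158.16*d - 116.01
    (r = 0.91). Units (cm in, ml out) are assumed."""
    return 158.16 * d_cm - 116.01

def effusion_volume_from_diameters(l_cm, h_cm, d_cm):
    """Estimate the volume from the product of the three diameters,
    using V = 0.56*(l*h*d) + 39.44 (r = 0.92)."""
    return 0.56 * (l_cm * h_cm * d_cm) + 39.44

# A 2 cm deep effusion gives 158.16*2 - 116.01 = 200.31 (about 200 ml).
v = effusion_volume_from_depth(2.0)
```

Note that the depth-only equation goes negative for d below about 0.73, so it is only meaningful within the fitted range of effusion sizes.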
A concept of volume rendering guided search process to analyze medical data set.
Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro
2008-03-01
This paper first presents a parallel-coordinates-based parameter control panel (PCP). The PCP is used to control the parameters of focal region-based volume rendering (FRVR) during data analysis. It uses a parallel-coordinates-style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show the dependencies between renditions and parameters. Based on the PCP, a concept of a volume rendering guided search process is proposed. The search pipeline is divided into four phases. Different parameters of FRVR are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can play the role of guiding a search process in the rendition space, helping users efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness.
NASA Astrophysics Data System (ADS)
Yoon, Jayoung; Kim, Gerard J.
2003-04-01
Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for the various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image space criteria are used; however, the switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth.
Also, during interaction, a 3D representation is used if it exists, regardless of the viewing distance. Before rendering, objects are conservatively culled from the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
Automatic Perceptual Color Map Generation for Realistic Volume Visualization
Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor
2008-01-01
Advances in computed tomography imaging technology and inexpensive high-performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume rendered patient data. PMID:18430609
MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy
2016-11-01
We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized and functionally-related images). The visual texture of the images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research and for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.
A data distributed parallel algorithm for ray-traced volume rendering
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.
1993-01-01
This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5 and networked workstations. The algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolumes concurrently; no communication between processing units is needed during this local ray-tracing process. A subimage is generated by each processing unit, and the final image is obtained by compositing the subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
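The a-priori ordered compositing step can be sketched as follows, assuming each node has produced a premultiplied RGBA subimage plus a representative depth for its subvolume. This is a serial simplification of the actual parallel compositing network.

```python
import numpy as np

def composite_subimages(subimages, depths):
    """Sort-last compositing sketch: blend per-node RGBA subimages in
    back-to-front order, the order being known a priori from the
    depths of the subvolumes along the view direction."""
    order = np.argsort(depths)[::-1]       # farthest subvolume first
    image = np.zeros_like(subimages[0])
    for i in order:
        s = subimages[i]
        alpha = s[..., 3:4]
        image = s + (1.0 - alpha) * image  # premultiplied 'over'
    return image

# A far opaque red subimage behind a near half-transparent blue one.
far_red = np.zeros((1, 1, 4)); far_red[..., 0] = far_red[..., 3] = 1.0
near_blue = np.zeros((1, 1, 4)); near_blue[..., 2] = near_blue[..., 3] = 0.5
img = composite_subimages([far_red, near_blue], depths=[10.0, 1.0])
```

Because the 'over' operator is associative, the same result can be obtained by pairwise merges in parallel, as long as the front-to-back ordering is respected.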
Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization
NASA Astrophysics Data System (ADS)
Johnston, Semay; Renambot, Luc; Sauter, Daniel
2013-03-01
Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Ogiela, Marek R.
2012-10-01
The proposed framework for cognitive analysis of perfusion computed tomography images is a fusion of image processing, pattern recognition, and image analysis procedures. The output data of the algorithm consist of regions of perfusion abnormalities, anatomy-atlas descriptions of brain tissues, measures of perfusion parameters, and a prognosis for infarcted tissues. This information is superimposed onto the volumetric computed tomography data and displayed to radiologists. Our rendering algorithm enables rendering large volumes on off-the-shelf hardware. This portability of the rendering solution is very important because our framework can be run without expensive dedicated hardware. The other important factors are the theoretically unlimited size of the rendered volume and the possibility of trading off image quality for rendering speed. Such high-quality visualizations may be further used for intelligent identification of brain perfusion abnormalities and computer-aided diagnosis of selected types of pathologies.
Volumetric depth peeling for medical image display
NASA Astrophysics Data System (ADS)
Borland, David; Clarke, John P.; Fielding, Julia R.; Taylor II, Russell M.
2006-01-01
Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.
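The core idea, rendering invisible everything between the eye and the object of interest, can be sketched for a single ray of opacity samples. The threshold semantics below are a simplification of the paper's occlusion test, and the function name is illustrative.

```python
def depth_peel_ray(samples, occluder_threshold):
    """Volumetric depth peeling sketch for one ray.

    Samples are opacities ordered from the eye into the volume. All
    samples up to and through the first occluding region are zeroed,
    so compositing effectively starts at the surface of interest
    behind it. Rays that hit no occluder are returned unchanged."""
    i, n = 0, len(samples)
    while i < n and samples[i] < occluder_threshold:
        i += 1                        # march through empty/translucent space
    if i == n:
        return list(samples)          # no occluder on this ray
    while i < n and samples[i] >= occluder_threshold:
        i += 1                        # peel away the first occluding region
    return [0.0] * i + list(samples[i:])

# An occluding surface (0.9, 0.9) in front of the feature of interest (0.7).
vis = depth_peel_ray([0.0, 0.9, 0.9, 0.0, 0.7], occluder_threshold=0.5)
```

In effect, the peeled prefix of each ray defines the data- and view-dependent clipping surface the abstract describes.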
NASA Astrophysics Data System (ADS)
Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo
2011-09-01
Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional data (4D), 3D in space plus time, is huge for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphic cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework, in which an octree-based multiresolution data structure is implemented to organize time series 3D geospatial data to be used in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data of a simulated dust storm event are rendered in World Wind, an open source virtual globe. The rendering performances with and without the octree-based LOD strategy are compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances the visualization performance when rendering dynamic geospatial phenomena in virtual globes.
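A minimal sketch of view-dependent LOD selection for an octree of time-series bricks might look like the following; the distance law, the scale constant and the function name are illustrative, not taken from the paper.

```python
import math

def select_lod(distance, base_size, k=1.0, max_level=4):
    """Pick an octree level for a brick: deeper (finer) levels near
    the viewer, coarser ones far away. Each doubling of the viewing
    distance drops one level. k and the law itself are illustrative."""
    level = max_level - int(math.log2(max(distance / (k * base_size), 1.0)))
    return max(0, min(max_level, level))
```

During rendering, each visible brick would be fetched at its selected level from the octree, which is what keeps the frame rate roughly constant as the viewer zooms out.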
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Combined approach of shell and shear-warp rendering for efficient volume visualization
NASA Astrophysics Data System (ADS)
Falcao, Alexandre X.; Rocha, Leonardo M.; Udupa, Jayaram K.
2003-05-01
In medical imaging, shell rendering (SR) and shear-warp rendering (SWR) are two ultra-fast and effective methods for volume visualization. We have previously shown that, typically, SWR is on average 1.38 times faster than SR, but it requires from 2 to 8 times more memory space than SR. In this paper, we propose an extension of the compact shell data structure utilized in SR to allow shear-warp factorization of the viewing matrix, in order to obtain speed-up gains for SR without paying the high storage price of SWR. The new approach is called shear-warp shell rendering (SWSR). The paper describes the methods, points out their major differences in computational aspects, and presents a comparative analysis of them in terms of speed, storage, and image quality. The experiments involve hard and fuzzy boundaries of 10 different objects of various sizes, shapes, and topologies, rendered on a 1 GHz Pentium-III PC with 512 MB RAM, utilizing surface and volume rendering strategies. The results indicate that SWSR offers the best compromise among these methods in terms of speed and storage. We also show that SWSR improves rendition quality over SR, and provides renditions similar to those produced by SWR.
A Graph Based Interface for Representing Volume Visualization Results
NASA Technical Reports Server (NTRS)
Patten, James M.; Ma, Kwan-Liu
1998-01-01
This paper discusses a graph-based user interface for representing the results of the volume visualization process. As images are rendered, they are connected to other images in a graph based on their rendering parameters. The user can take advantage of the information in this graph to understand how certain rendering parameter changes affect a dataset, making the visualization process more efficient. Because the graph contains more information than an unstructured history of images, the image graph is also helpful for collaborative visualization and animation.
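A minimal sketch of such an image graph might look like the following, with nodes keyed by rendering parameters and edges recording which parameters changed between renderings. The storage layout is illustrative, not the paper's implementation.

```python
class ImageGraph:
    """Toy image graph: each rendered image is a node keyed by its
    rendering parameters; an edge records which parameters changed
    between two renderings, so a user can trace how a change
    affected the result."""
    def __init__(self):
        self.nodes = {}   # frozenset of (param, value) pairs -> image id
        self.edges = []   # (from_image, to_image, changed_param_names)

    def add_render(self, params, image_id, prev_params=None):
        self.nodes[frozenset(params.items())] = image_id
        if prev_params is not None:
            changed = {k for k in params
                       if params[k] != prev_params.get(k)}
            prev_id = self.nodes[frozenset(prev_params.items())]
            self.edges.append((prev_id, image_id, changed))

g = ImageGraph()
base = {"opacity": 0.2, "colormap": "bone"}
tweak = {"opacity": 0.5, "colormap": "bone"}
g.add_render(base, "img1")
g.add_render(tweak, "img2", prev_params=base)
```

Following edges labeled with a single parameter shows the user exactly what that parameter does to the rendition, which is the exploratory benefit the abstract describes.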
Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang
2012-02-01
A common problem in image-guided radiation therapy (IGRT) of lung cancer, as well as other malignant diseases, is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs), generated from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not yet been achieved. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels from a 53 MB CT dataset at a rate of almost 100 Hz. This rendering rate is made feasible by a number of algorithmic simplifications, which range from alternative volume-driven rendering approaches, namely so-called wobbled splatting, to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were consistently utilized. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1998-10-01
The problem of three-dimensional visualization of a human lens in vivo has been solved by volume rendering a transformed series of 60 rotated Scheimpflug (a dual-slit reflected light microscope) digital images. The data set was obtained by rotating the Scheimpflug camera about the optic axis of the lens in 3-degree increments. The transformed optical sections were first aligned to correct for small eye movements, and then rendered into a volume reconstruction using volume rendering computer graphics techniques. To help visualize the distribution of lens opacities (cataracts) in the living human lens, the intensity of light scattering was pseudocolor coded and the cataract opacities were displayed as a movie.
Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.
2015-01-01
The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349
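The volume-overestimation issue described above comes down to how many voxels are classified as bone. A minimal sketch of per-slice thresholded volume measurement, assuming NumPy and with made-up spacing values (the study's actual thresholds and spacings differ):

```python
import numpy as np

def slice_thresholded_volume(slices, spacing_xy, slice_thickness, thresholds):
    """Estimate bone volume from a CT stack by thresholding each slice
    with its own threshold (slice-by-slice thresholding, as opposed to a
    single global value), then summing voxel volumes."""
    voxel_vol = spacing_xy * spacing_xy * slice_thickness  # mm^3 per voxel
    count = sum(int((s >= t).sum()) for s, t in zip(slices, thresholds))
    return count * voxel_vol

# Two toy 4x4 slices: the first entirely below threshold, the second above
slices = [np.full((4, 4), 100.0), np.full((4, 4), 300.0)]
v = slice_thresholded_volume(slices, spacing_xy=0.5,
                             slice_thickness=1.25, thresholds=[200, 200])
print(v)  # 16 voxels * 0.25 mm^2 * 1.25 mm = 5.0 mm^3
```

The sketch also makes the slice-thickness effect visible: with a thicker slice, each misclassified voxel contributes proportionally more volume, which is why thin slices reduced overestimation.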
Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D
2001-07-01
The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and a volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models, in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) or lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization exploits the advantages of both the color-coded surface and volume rendering methods and provides a clear representation of the tracheobronchial system and of the complex topographical relationships of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model enables good to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering simplifies the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.
TransCut: interactive rendering of translucent cutouts.
Li, Dongping; Sun, Xin; Ren, Zhong; Lin, Stephen; Tong, Yiying; Guo, Baining; Zhou, Kun
2013-03-01
We present TransCut, a technique for interactive rendering of translucent objects undergoing fracturing and cutting operations. As the object is fractured or cut open, the user can directly examine and intuitively understand the complex translucent interior, as well as edit material properties through painting on cross sections and recombining the broken pieces, all with immediate and realistic visual feedback. This new mode of interaction with translucent volumes is made possible by two technical contributions. The first is a novel solver for the diffusion equation (DE) over a tetrahedral mesh that produces high-quality results comparable to the state-of-the-art finite element method (FEM) of Arbree et al. but at substantially higher speeds. This accuracy and efficiency are obtained by computing the discrete divergences of the diffusion equation and constructing the DE matrix using analytic formulas derived for linear finite elements. The second contribution is a multiresolution algorithm that significantly accelerates our DE solver while adapting to frequent changes in the topological structure of dynamic objects. The entire multiresolution DE solver is highly parallel and easily implemented on the GPU. We believe TransCut provides a novel visual effect for heterogeneous translucent objects undergoing fracturing and cutting operations.
Christiansen, Andrew R; Shorti, Rami M; Smith, Cory D; Prows, William C; Bishoff, Jay T
2018-05-01
Despite the increasing use of advanced 3D imaging techniques and 3D printing, these techniques have not yet been comprehensively compared in a surgical setting. The purpose of this study is to explore the effectiveness of five different advanced imaging modalities during a complex renal surgical procedure. A patient with a horseshoe kidney and multiple large, symptomatic stones that had failed Extracorporeal Shock Wave Lithotripsy (ESWL) and ureteroscopy treatment was used for this evaluation. CT data were used to generate five different imaging modalities, including a 3D printed model, three different volume-rendered models, and a geometric CAD model. A survey was used to evaluate the quality and breadth of the imaging modalities during four different phases of the laparoscopic procedure. In the case of a complex kidney procedure, the CAD model, the 3D print, and the three volume-rendered models (basic, interactive, and displayed on an autostereoscopic 3D display) provided added insight and complemented the surgical procedure. CAD manual segmentation allowed tissue layers and/or kidney stones to be made colorful and semi-transparent, allowing easier navigation through abnormal vasculature. The 3D print allowed for simultaneous visualization of the renal pelvis and surrounding vasculature. Our preliminary exploration indicates that various advanced imaging modalities, when properly utilized and supported during surgery, can be useful in complementing the CT data and the laparoscopic display. This study suggests that various imaging modalities, such as the ones utilized in this case, can be beneficial intraoperatively depending on the surgical step involved and may be more helpful than 3D printed models. We also present factors to consider when evaluating advanced imaging modalities during complex surgery.
Enhanced visualization of MR angiogram with modified MIP and 3D image fusion
NASA Astrophysics Data System (ADS)
Kim, JongHyo; Yeon, Kyoung M.; Han, Man Chung; Lee, Dong Hyuk; Cho, Han I.
1997-05-01
We have developed a 3D image processing and display technique that includes image resampling, a modified MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates visualization of the 3D spatial relationship between vasculature and surrounding organs by overlaying the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram overlaid on a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other brain structures. The two images are fused, after adjustment of the contrast and brightness levels of each image, in such a way that both the vasculature and the brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resulting image visualizes both the brain structure and the vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for surgical planning in neurosurgery.
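The two building blocks, MIP and maximum-value fusion, can be sketched in a few lines of NumPy. This is only an illustration of the operations named above, with toy data; the authors' pipeline additionally resamples the data and adjusts contrast and color tables:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: the brightest voxel along each ray."""
    return volume.max(axis=axis)

def fuse_max(mip_img, rendered_img):
    """Fuse two co-registered 2D images by taking the per-pixel maximum,
    one of the two combination rules described in the abstract."""
    return np.maximum(mip_img, rendered_img)

# Toy data: one bright "vessel" column inside a dark volume,
# and a uniform gray "brain" rendering
vessels = np.zeros((8, 8, 8))
vessels[:, 4, 4] = 200.0
brain = np.full((8, 8), 80.0)
fused = fuse_max(mip(vessels), brain)
print(fused[4, 4], fused[0, 0])  # 200.0 80.0
```

The vessel survives fusion wherever it is brighter than the background rendering, which is exactly why MIP pairs well with angiographic data.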
Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito
2012-07-01
In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method, and then reconstructed by a hybrid method combining surface rendering and volume rendering methods. With surface rendering, multimodality and multithreshold techniques for 1 tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as 1 computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. Mean (±SEM) area under the ROC curve for rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091; p<0.001, Mann-Whitney U-test). 
The authors report a new method for automatic registration of preoperative imaging data from CT, MRI, and 3D rotational angiography for reconstruction into 1 computer graphic. The diagnostic rate of DVA associated with brainstem cavernous malformation was significantly better using interactive computer graphics than with 2D images. Interactive computer graphics was also useful in helping to plan the surgical access corridor.
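The automatic fusion step above relies on normalized mutual information (NMI). A compact NumPy sketch of the measure follows; the bin count, image sizes and the particular NMI variant, (H(A) + H(B)) / H(A, B), are common choices rather than details taken from the paper:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A,B); higher means better alignment.
    A standard similarity measure for multimodal image registration."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def H(p):  # Shannon entropy of a discrete distribution
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (H(px) + H(py)) / H(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
nmi_self = normalized_mutual_information(img, img)
nmi_rand = normalized_mutual_information(img, rng.random((64, 64)))
print(nmi_self > nmi_rand)  # True
```

A registration optimizer varies the rigid-body transform of one modality and keeps the pose that maximizes this score.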
A parallel coordinates style interface for exploratory volume visualization.
Tory, Melanie; Potts, Simeon; Möller, Torsten
2005-01-01
We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.
Anastasi, Giuseppe; Bramanti, Placido; Di Bella, Paolo; Favaloro, Angelo; Trimarchi, Fabio; Magaudda, Ludovico; Gaeta, Michele; Scribano, Emanuele; Bruschetta, Daniele; Milardi, Demetrio
2007-01-01
The choice of medical imaging techniques, for the purpose of the present work aimed at studying the anatomy of the knee, derives from the increasing use of images in diagnostics, research and teaching, and the subsequent importance that these methods are gaining within the scientific community. Medical systems using virtual reality techniques also offer a good alternative to traditional methods, and are considered among the most important tools in the areas of research and teaching. In our work we have shown some possible uses of three-dimensional imaging for the study of the morphology of the normal human knee, and its clinical applications. We used the direct volume rendering technique, and created a data set of images and animations to allow us to visualize the single structures of the human knee in three dimensions. Direct volume rendering makes use of specific algorithms to transform conventional two-dimensional magnetic resonance imaging sets of slices into see-through volume data set images. It is a technique which does not require the construction of intermediate geometric representations, and has the advantage of allowing the visualization of a single image of the full data set, using semi-transparent mapping. Digital images of human structures, and in particular of the knee, offer important information about anatomical structures and their relationships, and are of great value in the planning of surgical procedures. On this basis we studied seven volunteers with an average age of 25 years, who underwent magnetic resonance imaging. After elaboration of the data through post-processing, we analysed the structure of the knee in detail. The aim of our investigation was the three-dimensional image, in order to comprehend better the interactions between anatomical structures. We believe that these results, applied to living subjects, widen the frontiers in the areas of teaching, diagnostics, therapy and scientific research. PMID:17645453
Establishing the 3-D finite element solid model of femurs in partial by volume rendering.
Zhang, Yinwang; Zhong, Wuxue; Zhu, Haibo; Chen, Yun; Xu, Lingjun; Zhu, Jianmin
2013-01-01
Although several methods of femoral 3-D finite element modeling are available, reports of three-dimensional (3-D) finite element solid models of partial femurs built by the volume rendering method remain rare. We aim to analyze the advantages of this modeling method by establishing a 3-D finite element solid model of a partial femur by volume rendering. A 3-D finite element model of normal human femurs, made up of three anatomic structures (cortical bone, cancellous bone and pulp cavity), was constructed after pretreatment of the original CT images. Moreover, finite element analysis was carried out with different material properties: three types of materials assigned to cortical bone, six to cancellous bone, and a single one to the pulp cavity. The established 3-D finite element model of the femur contains three anatomical structures: cortical bone, cancellous bone, and pulp cavity. The compressive stress was primarily concentrated in the medial surface of the femur, especially in the calcar femorale. Compared with whole-bone modeling by the volume rendering method, the partial 3-D finite element solid model is more realistic and better suited for finite element analysis. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar
2005-05-01
3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs (which are perspective summed voxel renderings) is desired. In this note, we present a simple and rapid method for generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by the simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at frame rates of approximately 10 Hz when rendering volume images with a size of 30 MB.
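The stochastic-distortion idea can be sketched as follows: each nonzero voxel is splatted into the image at a randomly jittered position instead of being convolved with a footprint. The sketch below uses a parallel projection and Gaussian jitter for brevity, whereas the note's method is perspective; parameter values are illustrative:

```python
import numpy as np

def jittered_splat_drr(volume, out_shape, sigma=0.4, seed=0):
    """Parallel-projection splatting with stochastic jitter of voxel
    positions: the noise replaces an explicit footprint function as
    the anti-aliasing mechanism."""
    rng = np.random.default_rng(seed)
    zi, yi, xi = np.nonzero(volume)
    vals = volume[zi, yi, xi]
    # jitter the in-plane position of each contributing voxel
    y = np.clip(np.round(yi + rng.normal(0, sigma, yi.size)),
                0, out_shape[0] - 1).astype(int)
    x = np.clip(np.round(xi + rng.normal(0, sigma, xi.size)),
                0, out_shape[1] - 1).astype(int)
    img = np.zeros(out_shape)
    np.add.at(img, (y, x), vals)  # accumulate splats, duplicates summed
    return img

vol = np.zeros((16, 16, 16))
vol[4:12, 8, 8] = 1.0  # a small attenuating rod
img = jittered_splat_drr(vol, (16, 16))
print(img.sum())  # 8.0 (total attenuation is preserved)
```

The jitter spreads the rod's energy over neighboring pixels, producing the "slightly blurred" DRRs the note describes, while `np.add.at` keeps the total projected attenuation unchanged.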
High Performance GPU-Based Fourier Volume Rendering.
Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr
2015-01-01
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. Thanks to its O(N² log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms, which are O(N³). Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) has become an attractive platform that delivers enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high-performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This implementation achieves a speed-up of 117x over a single-threaded hybrid implementation that uses the CPU and GPU together, by executing the rendering pipeline entirely on recent GPU architectures.
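The projection-slice theorem that FVR relies on states that the 2D Fourier transform of a parallel projection equals a central slice of the volume's 3D Fourier transform. A NumPy sketch, restricted to an axis-aligned projection so the "slice" is simply the zero-frequency plane (a real FVR renderer resamples an arbitrarily oriented slice in the spectral domain):

```python
import numpy as np

def fourier_projection(volume, axis=0):
    """Fourier volume rendering for an axis-aligned view:
    take the central (zero-frequency) slice of the 3D FFT
    perpendicular to the ray direction, then invert it in 2D."""
    F = np.fft.fftn(volume)
    sl = np.take(F, 0, axis=axis)   # central slice at k=0 along `axis`
    return np.real(np.fft.ifftn(sl))  # 2D inverse FFT of the slice

vol = np.random.default_rng(1).random((8, 8, 8))
direct = vol.sum(axis=0)            # spatial-domain projection, O(N^3)
fvr = fourier_projection(vol, axis=0)
print(np.allclose(direct, fvr))     # True
```

The payoff is that after one O(N³ log N) precomputation of the 3D FFT, each new view costs only a slice extraction plus a 2D inverse FFT, which is where the per-frame O(N² log N) complexity comes from.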
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.
2011-01-01
Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227
Image fusion for visualization of hepatic vasculature and tumors
NASA Astrophysics Data System (ADS)
Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.
1995-05-01
We have developed segmentation and simultaneous display techniques to facilitate visualization of the three-dimensional spatial relationships between organ structures and organ vasculature. We concentrate on visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximum intensity projection (MIP) algorithms are used for data visualization. To extract the liver from the series of images accurately and efficiently, we have developed a user-friendly interactive program with deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures: adjacent contours are aligned and fitted with a Bezier surface to yield a smooth surface. Visualization of the vascular structures, the portal and hepatic veins, is achieved by applying a MIP technique to the extracted liver volume. To integrate the extracted structures, the surface-rendered and MIP images are aligned, and a color table is designed for simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, the portal veins, hepatic veins, and hepatic tumor can be inspected simultaneously and their spatial relationships can be more easily perceived. The proposed technique will be useful for visualization of both hepatic neoplasms and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; VanGelder, Allen; Tarantino, Paul; Gibbs, Jonathan
1996-01-01
A general volume rendering technique is described that efficiently produces images of excellent quality from data defined over irregular grids having a wide variety of formats. Rendering is done in software, eliminating the need for special graphics hardware, as well as any artifacts associated with graphics hardware. Images of volumes with about one million cells can be produced in one to several minutes on a workstation with a 150 MHz processor. A significant advantage of this method for applications such as computational fluid dynamics is that it can process multiple intersecting grids. Such grids present problems for most current volume rendering techniques. Also, the wide range of cell sizes (by a factor of 10,000 or more), which is typical of such applications, does not present difficulties, as it does for many techniques. A spatial hierarchical organization makes it possible to access data from a restricted region efficiently. The tree has greater depth in regions of greater detail, determined by the number of cells in the region. It also makes it possible to render useful 'preview' images very quickly (about one second for one-million-cell grids) by displaying each region associated with a tree node as one cell. Previews show enough detail to navigate effectively in very large data sets. The algorithmic techniques include use of a k-d tree, with prefix-order partitioning of triangles, to reduce the number of primitives that must be processed for one rendering, coarse-grain parallelism for a shared-memory MIMD architecture, a new perspective transformation that achieves greater numerical accuracy, and a scanline algorithm with depth sorting and a new clipping technique.
A service protocol for post-processing of medical images on the mobile device
NASA Astrophysics Data System (ADS)
He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian
2014-03-01
With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited in bandwidth. Besides, limited by computing capability, memory and battery life, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on a mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. To allow mobile devices with different platforms to access post-processing of medical images, the protocol is described in the Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value readout) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance allows mobile devices to access post-processing services for medical images on the render server via a client application or a web page.
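To make the four-part protocol concrete, here is a hypothetical request message assembled with Python's standard xml.etree.ElementTree. Every element and attribute name below (request, auth, series, render, the UID and token values) is illustrative and not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical 3D post-processing request: authenticate,
# identify the image series, and ask the render server for a MIP view.
req = ET.Element("request", type="3DPostProcessing")
ET.SubElement(req, "auth", user="radiologist01", token="TOKEN")
ET.SubElement(req, "series", uid="1.2.840.113619.2.55")
render = ET.SubElement(req, "render", mode="MIP")
render.set("width", "512")
render.set("height", "512")

msg = ET.tostring(req, encoding="unicode")
print("MIP" in msg)  # True
```

In such a design the server would reply with a rendered frame (or an error element), so the bandwidth-limited mobile client only ever transfers 2D images, never the full volume.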
Efficient Stochastic Rendering of Static and Animated Volumes Using Visibility Sweeps.
von Radziewsky, Philipp; Kroes, Thomas; Eisemann, Martin; Eisemann, Elmar
2017-09-01
Stochastically solving the rendering integral (particularly visibility) is the de-facto standard for physically-based light transport, but it is computationally expensive, especially when displaying heterogeneous volumetric data. In this work, we present efficient techniques to speed up the rendering process via a novel visibility-estimation method in concert with unbiased importance sampling (involving environmental lighting and visibility inside the volume), filtering, and update techniques for both static and animated scenes. Our major contributions include a progressive estimate of partial occlusions based on a fast sweeping-plane algorithm. These occlusions are stored in an octahedral representation, which can be conveniently transformed into a quadtree-based hierarchy suited for joint importance sampling. Further, we propose sweep-space filtering, which suppresses the occurrence of fireflies, and we investigate different update schemes for animated scenes. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, animated volumes and changing environmental lighting.
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention on the ocean and the rapid development of marine detection technology, there is increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing and a fast grid-oriented strategy, a series of efficient and high-quality visualization methods, able to handle large-scale and multi-dimensional marine data under different environmental circumstances, is proposed in this paper. Firstly, a high-quality seawater simulation is realized using an FFT algorithm, bump mapping and texture animation. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized with 3D interactive techniques and volume rendering. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, surface reconstruction, dynamic LOD and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying marine environment simulation, but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established with the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above, and shows movement processes, physical parameters, current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. With such an application, valuable reference and decision-making information can be provided for understanding the progress of a deep-water oil spill, which is helpful for ocean disaster forecasting, warning and emergency response.
A high-level 3D visualization API for Java and ImageJ.
Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin
2010-05-21
Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.
Semantics by analogy for illustrative volume visualization
Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard
2012-01-01
We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics that allows flexible exploration of a wide range of mappings. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827
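The kind of fuzzy-logic rule evaluation the abstract refers to can be illustrated with a minimal sketch. Everything below is hypothetical: the triangular membership functions, their breakpoints, and the use of min for AND are common fuzzy-logic conventions, not details taken from this paper.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def evaluate_rule(density, gradient_mag):
    """One illustrative rule of the kind the text describes:
    'if density is high AND gradient is strong then opacity is high',
    with AND realized as min. Breakpoints are made up for the example."""
    high_density = triangular(density, 0.5, 1.0, 1.5)
    strong_gradient = triangular(gradient_mag, 0.3, 0.8, 1.3)
    return min(high_density, strong_gradient)
```

The returned membership degree would then drive a visual attribute such as opacity in the augmented shader.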
Virtual probing system for medical volume data
NASA Astrophysics Data System (ADS)
Xiao, Yongfei; Fu, Yili; Wang, Shuguo
2007-12-01
Because of the huge amount of computation involved in 3D medical data visualization, interactively exploring the interior of the data has long been an open problem. In this paper, we present a novel approach to exploring a 3D medical dataset in real time by using a 3D widget to manipulate the scanning plane. With the help of the 3D texture capability of modern graphics cards, a virtual scanning probe extracts an oblique clipping plane of the medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. The system can be a valuable tool for anatomy education and for the understanding of medical images in medical research.
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
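The perspective grid idea, a view-volume discretization whose cells grow with distance from the eye so their projected size stays roughly constant, can be sketched as a geometric slab index along the view ray. The near distance and growth ratio below are illustrative assumptions, not values from the paper.

```python
import math

def perspective_slab(depth, near=1.0, ratio=2.0):
    """Index of the view-space depth slab containing `depth`, with slab
    thickness growing geometrically away from the eye so that cells keep
    a roughly constant projected size -- a continuous level-of-detail
    along the ray, in the spirit of the perspective grid. `near` and
    `ratio` are illustrative parameters."""
    return int(math.floor(math.log(depth / near, ratio)))
```

Particles would then be binned into slabs (and, within each slab, into a regular transverse grid) before the order-dependent resampling along view rays.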
NASA Astrophysics Data System (ADS)
Tavakkol, Sasan; Lynett, Patrick
2017-08-01
In this paper, we introduce an interactive coastal wave simulation and visualization software, called Celeris. Celeris is an open source software which needs minimum preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real-time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.
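For a flavor of what one explicit time step of a depth-averaged wave solver looks like, here is a drastically simplified sketch: a single step of the 1D *linearized* shallow-water equations on a periodic staggered grid. This is not the extended Boussinesq system or the hybrid finite volume-finite difference scheme Celeris actually implements; all names and parameters are illustrative.

```python
def shallow_water_step(eta, u, dt, dx, g=9.81, depth=10.0):
    """One explicit finite-difference step of the linearized 1D shallow
    water equations (periodic domain): u_t = -g * eta_x, eta_t = -H * u_x.
    eta is surface elevation at cell centers, u is velocity at faces."""
    n = len(eta)
    # momentum update: backward difference of eta (periodic via index -1)
    new_u = [u[i] - dt * g * (eta[i] - eta[i - 1]) / dx for i in range(n)]
    # continuity update: forward difference of the new velocities
    new_eta = [eta[i] - dt * depth * (new_u[(i + 1) % n] - new_u[i]) / dx
               for i in range(n)]
    return new_eta, new_u
```

On a periodic domain the flux differences telescope, so total water mass (the sum of eta) is conserved by the step.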
Improving the visualization of 3D ultrasound data with 3D filtering
NASA Astrophysics Data System (ADS)
Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin
2005-04-01
3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. 
Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
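The moving-average method mentioned above can be sketched in a few lines: a k-wide boxcar along one axis needs only one add and one subtract per voxel (a running sum), and a full k x k x k boxcar is three such 1D passes, one per axis. This is an illustrative pure-Python sketch, not the MAP/Pentium implementation from the paper; the function names are hypothetical and volumes are nested lists indexed [z][y][x].

```python
def moving_average_1d(line, k):
    """k-wide boxcar along one line via a running sum (edges clamped):
    O(1) work per sample instead of O(k)."""
    n, half = len(line), k // 2
    s = sum(line[0:half + 1])              # initial window: center + right half
    count = min(half + 1, n)
    out = []
    for i in range(n):
        out.append(s / count)
        right, left = i + half + 1, i - half   # samples entering / leaving
        if right < n:
            s += line[right]; count += 1
        if left >= 0:
            s -= line[left]; count -= 1
    return out

def smooth_axis(vol, k, axis):
    """Apply the 1D moving average along one axis (0=z, 1=y, 2=x)."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    if axis == 2:
        for z in range(nz):
            for y in range(ny):
                line = moving_average_1d(vol[z][y], k)
                for x in range(nx):
                    out[z][y][x] = line[x]
    elif axis == 1:
        for z in range(nz):
            for x in range(nx):
                line = moving_average_1d([vol[z][y][x] for y in range(ny)], k)
                for y in range(ny):
                    out[z][y][x] = line[y]
    else:
        for y in range(ny):
            for x in range(nx):
                line = moving_average_1d([vol[z][y][x] for z in range(nz)], k)
                for z in range(nz):
                    out[z][y][x] = line[z]
    return out

def boxcar_3d(vol, k=3):
    """Full k x k x k boxcar smoothing as three separable 1D passes."""
    for axis in (2, 1, 0):
        vol = smooth_axis(vol, k, axis)
    return vol
```

Because the box filter is separable, the cost is three cheap passes rather than a 27-tap convolution per voxel, which is what makes filtering twice per frame tractable.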
Kahrs, Lüder Alexander; Labadie, Robert Frederick
2013-01-01
Cadaveric dissection of temporal bone anatomy is not always possible or feasible in certain educational environments. Volume rendering of CT and/or MRI data helps in understanding spatial relationships, but such renderings suffer from nonrealistic depiction, especially regarding the color of anatomical structures. Freely available, nonstained histological data sets, together with software able to render such data sets in realistic color, could overcome this limitation and be a very effective teaching tool. With the recent availability of specialized public-domain software, volume rendering of true-color histological data sets is now possible. We present both feasibility and step-by-step instructions for processing publicly available data sets (Visible Female Human and Visible Ear) into easily navigable 3-dimensional models using free software. Example renderings are shown to demonstrate the utility of these free methods in virtual exploration of the complex anatomy of the temporal bone. After exploring the data sets, the Visible Ear appears more natural than the Visible Human. We provide directions for easy-to-use, open-source software in conjunction with freely available histological data sets. This work facilitates self-education of the spatial relationships of anatomical structures inside the human temporal bone and allows exploration of surgical approaches prior to cadaveric testing and/or clinical implementation. Copyright © 2013 S. Karger AG, Basel.
Entrainment-Zone Restratification and Flow Structures in Stratified Shear Turbulence
NASA Technical Reports Server (NTRS)
Reif, B. Anders Pettersson; Werne, Joseph; Andreassen, Oyvind; Meyer, Christian; Davis-Mansour, Melissa
2002-01-01
Late-time dynamics and morphology of a stratified turbulent shear layer are examined using 1) Reynolds-stress and heat-flux budgets, 2) the single-point structure tensors introduced by Kassinos et al. (2001), and 3) flow visualization via 3D volume rendering. Flux reversal is observed during restratification in the edges of the turbulent layer. We present a first attempt to quantify the turbulence-mean-flow interaction and to characterize the predominant flow structures. Future work will extend this analysis to earlier times and different values of the Reynolds and Richardson numbers.
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one-year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple-zone grids and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model that approximates samples in the regions covered by each node of the tree, together with an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
Archeological Testing Fort Hood: 1994-1995. Volume 2
1996-10-01
Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis
NASA Technical Reports Server (NTRS)
Roth, Don J.
2013-01-01
A software method has been developed that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography (CT). This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2D sheets (or flattened "onion skins") in addition to a series of top-view slices and a 3D volume rendering. The advantages of viewing the data in this fashion are as follows: (1) the use of standard and specialized image processing and analysis methods is facilitated by having 2D array data rather than a volume rendering; (2) accurate lateral dimensional analysis of flaws is possible in the unwrapped sheets, unlike in a volume rendering; (3) flaws in the part jump out at the inspector with the proper contrast expansion settings in the unwrapped sheets; and (4) it is much easier for the inspector to locate flaws in the unwrapped sheets than in top-view slices for very thin cylinders. The method is fully automated and requires no input from the user except the proper voxel dimension from the CT experiment and the wall thickness of the part. The software is available in 32-bit and 64-bit versions, and can be used with binary data (8- and 16-bit) and BMP-type CT image sets. The software has memory (RAM) and hard-drive based modes. The advantage of the (64-bit) RAM-based mode is speed (and it is very practical for users of 64-bit Windows operating systems and computers having 16 GB or more RAM). The advantage of the hard-drive based analysis is that one can work with essentially unlimited-sized data sets. Separate windows are spawned for the unwrapped/re-sliced data view and any interactive image processing capability. Individual unwrapped images and unwrapped image series can be saved in common image formats. More information is available at http://www.grc.nasa.gov/WWW/OptInstr/NDE_CT_CylinderUnwrapper.html.
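The unwrapping step can be sketched as a polar resampling: each "onion skin" is a fixed-radius cylindrical shell, sampled over angle for every slice. Below is a minimal nearest-neighbor sketch assuming the cylinder axis is aligned with the slice stack; the function name and the zero fill for out-of-bounds samples are illustrative choices, not details of the NASA software.

```python
import math

def unwrap_cylinder(vol, cx, cy, radius, n_theta):
    """Unwrap one fixed-radius 'onion skin' of a cylindrical CT volume
    into a 2D sheet indexed (slice, angle). vol is [slice][row][col];
    sampling is nearest-neighbor, out-of-bounds samples become 0.0."""
    sheet = []
    for plane in vol:
        row = []
        for t in range(n_theta):
            theta = 2.0 * math.pi * t / n_theta
            x = int(round(cx + radius * math.cos(theta)))
            y = int(round(cy + radius * math.sin(theta)))
            if 0 <= y < len(plane) and 0 <= x < len(plane[0]):
                row.append(plane[y][x])
            else:
                row.append(0.0)
        sheet.append(row)
    return sheet
```

Sweeping the radius from inner to outer wall yields the full series of flattened sheets; a real implementation would interpolate rather than snap to the nearest voxel.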
Method and system for rendering and interacting with an adaptable computing environment
Osbourn, Gordon Cecil [Albuquerque, NM; Bouchard, Ann Marie [Albuquerque, NM
2012-06-12
An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.
A Heterogeneous Multiprocessor Graphics System Using Processor-Enhanced Memories
1989-02-01
frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form factors. The hardware consists of ... generality for rendering curved surfaces, volume data, and objects described with Constructive Solid Geometry, for rendering scenes using the radiosity lighting model, and for computing a spherical radiosity lighting model (see Section 7.6).
Development of the mouse cochlea database (MCD).
Santi, Peter A; Rapson, Ian; Voie, Arne
2008-09-01
The mouse cochlea database (MCD) provides an interactive, image database of the mouse cochlea for learning its anatomy and data mining of its resources. The MCD website is hosted on a centrally maintained, high-speed server at the following URL: (http://mousecochlea.umn.edu). The MCD contains two types of image resources, serial 2D image stacks and 3D reconstructions of cochlear structures. Complete image stacks of the cochlea from two different mouse strains were obtained using orthogonal plane fluorescence optical microscopy (OPFOS). 2D images of the cochlea are presented on the MCD website as: viewable images within a stack, 2D atlas of the cochlea, orthogonal sections, and direct volume renderings combined with isosurface reconstructions. In order to assess cochlear structures quantitatively, "true" cross-sections of the scala media along the length of the basilar membrane were generated by virtual resectioning of a cochlea orthogonal to a cochlear structure, such as the centroid of the basilar membrane or the scala media. 3D images are presented on the MCD website as: direct volume renderings, movies, interactive QuickTime VRs, flythrough, and isosurface 3D reconstructions of different cochlear structures. 3D computer models can also be used for solid model fabrication by rapid prototyping and models from different cochleas can be combined to produce an average 3D model. The MCD is the first comprehensive image resource on the mouse cochlea and is a new paradigm for understanding the anatomy of the cochlea, and establishing morphometric parameters of cochlear structures in normal and mutant mice.
Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; VanGelder, Allen
1997-01-01
In the second year, we continued to build upon and improve the scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data changes but the grid does not. We are now working on extending it to handle more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more moderately priced. While this technique is at present limited to regular grids, we are pursuing algorithms extending the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation.
We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in a pair of images and compares them using various metrics. The differences between the images under the RGB, HSV, and HSL color models can be calculated and shown. We can also calculate the auto-correlation function and the Fourier transform of the images and image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.
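One of the simpler metrics described, a per-channel difference under the RGB color model, might look like the following sketch. The function name and the choice of mean absolute difference are assumptions for illustration, not the idtools implementation.

```python
def rgb_difference(img_a, img_b):
    """Mean absolute per-channel (R, G, B) difference between two
    same-sized images, each a nested list of (r, g, b) tuples."""
    n = 0
    totals = [0.0, 0.0, 0.0]
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            for c in range(3):
                totals[c] += abs(pa[c] - pb[c])
            n += 1
    return [t / n for t in totals]
```

Identical images score [0, 0, 0]; a systematic shift in one channel shows up in that channel alone, which is what makes per-channel metrics useful for diagnosing rendering differences.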
Three-Dimensional Reconstruction of Thoracic Structures: Based on Chinese Visible Human
Luo, Na; Tan, Liwen; Fang, Binji; Li, Ying; Xie, Bing; Liu, Kaijun; Chu, Chun; Li, Min
2013-01-01
We managed to establish a three-dimensional digitized visible model of human thoracic structures and to provide morphological data for imaging diagnosis and thoracic and cardiovascular surgery. With Photoshop software, the contour lines of the lungs and mediastinal structures, including the heart, aorta and its branches, azygos vein, superior vena cava, inferior vena cava, thymus, esophagus, diaphragm, phrenic nerve, vagus nerve, sympathetic trunk, thoracic vertebrae, sternum, thoracic duct, and so forth, were segmented from the Chinese Visible Human (CVH)-1 data set. The contour data set of segmented thoracic structures was imported into Amira software, and 3D thorax models were reconstructed via surface rendering and volume rendering. With Amira software, the surface-rendered and volume-rendered models of the thoracic organs can be displayed together clearly and accurately. This provides a learning tool for interpreting human thoracic anatomy and for virtual thoracic and cardiovascular surgery for medical students and junior surgeons. PMID:24369489
NASA Astrophysics Data System (ADS)
Alyassin, Abdal M.
2002-05-01
3D digital mammography (3DDM) is a new technology that provides high-resolution X-ray breast tomographic data. As with other tomographic medical imaging modalities, viewing a stack of tomographic images takes time, especially if the images are of large matrix size. In addition, it can be difficult to mentally reconstruct 3D breast structures from the individual slices. Therefore, there is a need to readily visualize the data in 3D. However, one of the issues that hinders the use of volume rendering (VR) is finding an automatic way to generate transfer functions that efficiently map the important diagnostic information in the data. We have developed a method that randomly samples the volume. Based on the mean and the standard deviation of these samples, the technique determines the lower and upper limits of a piecewise-linear ramp transfer function. We have volume rendered several 3DDM datasets using this technique and visually compared the outcome with the result from a conventional automatic technique. The transfer function generated through the proposed technique provided superior VR images over the conventional technique. Furthermore, the reproducibility of the transfer function improved with the number of samples taken from the volume, at the expense of processing time.
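The sampling-based transfer-function idea reads almost directly as code: draw random voxels, take their mean and standard deviation, and derive the ramp's lower and upper limits from them. This is a minimal sketch assuming a mean ± k·std rule; the sample count, the factor k, and the function names are illustrative, not values from the paper.

```python
import math
import random

def estimate_ramp(vol, n_samples=1000, k=2.0, rng=None):
    """Estimate (lower, upper) limits of a piecewise-linear ramp transfer
    function from random voxel samples, as mean -/+ k * std."""
    rng = rng or random.Random(0)   # seeded for a reproducible sketch
    flat = [v for plane in vol for row in plane for v in row]
    samples = [flat[rng.randrange(len(flat))] for _ in range(n_samples)]
    mean = sum(samples) / len(samples)
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
    return mean - k * std, mean + k * std

def ramp_opacity(v, lo, hi):
    """The ramp itself: 0 below lo, 1 above hi, linear in between."""
    if v <= lo:
        return 0.0
    if v >= hi:
        return 1.0
    return (v - lo) / (hi - lo)
```

Because only a random subset of voxels is touched, the limits can be recomputed quickly, and (as the abstract notes) their stability improves with the number of samples.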
A novel approach to segmentation and measurement of medical image using level set methods.
Chen, Yao-Tien
2017-06-01
The study proposes a novel approach for segmentation and visualization plus value-added surface area and volume measurements for brain medical image analysis. The proposed method contains edge detection and Bayesian based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experiment results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
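As a rough illustration of value-added measurements on a segmented 3D object, the sketch below counts voxels for volume and exposed voxel faces for surface area. This is a deliberately coarser stand-in for the linear-algebra and surface-integration techniques the abstract describes; the function name and the face-counting rule are assumptions for the example.

```python
def measure_voxel_object(mask, spacing=(1.0, 1.0, 1.0)):
    """Volume and surface-area estimates for a binary 3D mask [z][y][x].
    Volume = voxel count * voxel volume; area = sum of face areas that
    border background or the volume boundary."""
    dz, dy, dx = spacing
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    voxel_volume = dz * dy * dx
    # neighbor offsets paired with the area of the separating face
    faces = [(1, 0, 0, dy * dx), (-1, 0, 0, dy * dx),
             (0, 1, 0, dz * dx), (0, -1, 0, dz * dx),
             (0, 0, 1, dz * dy), (0, 0, -1, dz * dy)]
    volume, area = 0.0, 0.0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not mask[z][y][x]:
                    continue
                volume += voxel_volume
                for oz, oy, ox, a in faces:
                    zz, yy, xx = z + oz, y + oy, x + ox
                    inside = 0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx
                    if not inside or not mask[zz][yy][xx]:
                        area += a
    return volume, area
```

Face counting overestimates the area of smooth surfaces (a voxelized sphere reads like a staircase), which is exactly why a marching-cubes mesh plus surface integration, as in the paper, gives better numbers.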
MTO-like reference mask modeling for advanced inverse lithography technology patterns
NASA Astrophysics Data System (ADS)
Park, Jongju; Moon, Jongin; Son, Suein; Chung, Donghoon; Kim, Byung-Gook; Jeon, Chan-Uk; LoPresti, Patrick; Xue, Shan; Wang, Sonny; Broadbent, Bill; Kim, Soonho; Hur, Jiuk; Choo, Min
2017-07-01
Advanced Inverse Lithography Technology (ILT) can result in mask post-OPC databases with very small address units, all-angle figures, and very high vertex counts. This creates mask inspection issues for existing mask inspection database rendering. These issues include: large data volumes, low transfer rate, long data preparation times, slow inspection throughput, and marginal rendering accuracy leading to high false detections. This paper demonstrates the application of a new rendering method including a new OASIS-like mask inspection format, new high-speed rendering algorithms, and related hardware to meet the inspection challenges posed by Advanced ILT masks.
Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates
NASA Astrophysics Data System (ADS)
Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.
2002-03-01
A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are a promising technique for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, with IBMR techniques a frequent problem is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem, by distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite into a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results which indicate we have achieved a stable frame rate. In addition to LOD-Sprite, our distribution method holds promise for other IBMR techniques.
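The LOD-Sprite trade-off, cheap 2D sprite warps between expensive 3D keyframe renders, can be sketched as a per-frame scheduling decision driven by how far the viewpoint has drifted from the cached keyframe's viewpoint. The drift metric (a single view angle in degrees) and the threshold below are illustrative simplifications of the algorithm's actual error criterion.

```python
def lod_sprite_schedule(view_angles, threshold_deg=5.0):
    """For each frame's view angle, decide whether to reuse the cached
    sprite ('warp': 2D resampling onto the low-LOD mesh) or render a new
    keyframe ('render': full 3D render from the high-LOD model)."""
    decisions, key_angle = [], None
    for angle in view_angles:
        if key_angle is None or abs(angle - key_angle) > threshold_deg:
            key_angle = angle          # cache a fresh keyframe sprite
            decisions.append("render")
        else:
            decisions.append("warp")   # cheap reuse of the cached sprite
    return decisions
```

Distributing the "render" work to a second machine, as the paper does, keeps the interactive loop doing only the cheap "warp" steps, which is what stabilizes the frame rate.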
Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei
2017-11-01
Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including the facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve root, and vertebral artery that can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and cervical spine injection procedures. Volume rendering, surface rendering, and a new rendering technique, semi-auto-combined, were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.
Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.
Laha, Bireswar; Bowman, Doug A; Socha, John J
2014-04-01
Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.
Scientific Visualization for Atmospheric Data Analysis in Collaborative Virtual Environments
NASA Astrophysics Data System (ADS)
Engelke, Wito; Flatken, Markus; Garcia, Arturo S.; Bar, Christian; Gerndt, Andreas
2016-04-01
1 INTRODUCTION The three-year European research project CROSS DRIVE (Collaborative Rover Operations and Planetary Science Analysis System based on Distributed Remote and Interactive Virtual Environments) started in January 2014. The research and development within this project is motivated by three use-case studies: landing site characterization, atmospheric science, and rover target selection [1]. Currently the implementation for the second use case is in its final phase [2]. Here, the requirements were generated based on the domain experts' input and led to the development and integration of appropriate methods for visualization and analysis of atmospheric data. The methods range from volume rendering, interactive slicing, and iso-surface techniques to interactive probing. All visualization methods are integrated in DLR's Terrain Rendering application. With this, the high-resolution surface data visualization can be enriched with additional methods appropriate for atmospheric data sets. This results in an integrated virtual environment where the scientist has the possibility to interactively explore the data sets directly within the correct context. The data sets include volumetric data of the Martian atmosphere, precomputed two-dimensional maps, and vertical profiles. In most cases the surface data as well as the atmospheric data have global coverage and are time dependent. Furthermore, all interaction is synchronized between different connected application instances, allowing for collaborative sessions between distant experts. 2 VISUALIZATION TECHNIQUES Although the application is currently used for visualization of data sets related to Mars, the techniques can be used for other data sets as well. Currently the prototype is capable of handling 2D and 2.5D surface data as well as 4D atmospheric data.
Specifically, the surface data is presented using an LoD approach which is based on the HEALPix tessellation of a sphere [3, 4, 5] and can handle data sets on the order of terabytes. The combination of different data sources (e.g., MOLA, HRSC, HiRISE) and selection of presented data (e.g., infrared, spectral, imagery) is also supported. Furthermore, the data is presented unchanged and with the highest possible resolution for the target setup (e.g., power-wall, workstation, laptop) and view distance. The visualization techniques for the volumetric data sets can handle VTK [6] based data sets and also support different grid types as well as a time component. In detail, the integrated volume rendering uses a GPU-based ray casting algorithm which was adapted to work in spherical coordinate systems. This approach results in interactive frame rates without compromising visual fidelity. Besides direct visualization via volume rendering, the prototype supports interactive slicing, extraction of iso-surfaces, and probing. The latter can also be used for side-by-side comparison and on-the-fly diagram generation within the application. Similarly to the surface data, a combination of different data sources is supported as well. For example, the extracted iso-surface of a scalar pressure field can be used for the visualization of the temperature. The software development is supported by the ViSTA VR-toolkit [7], which targets different systems as well as a wide range of VR devices. Furthermore, the prototype is scalable to run on laptops, workstations, and cluster setups. REFERENCES [1] A. S. Garcia, D. J. Roberts, T. Fernando, C. Bar, R. Wolff, J. Dodiya, W. Engelke, and A. Gerndt, "A collaborative workspace architecture for strengthening collaboration among space scientists," in IEEE Aerospace Conference, (Big Sky, Montana, USA), 7-14 March 2015. [2] W. Engelke, "Mars Cartography VR System 2/3." German Aerospace Center (DLR), 2015. Project Deliverable D4.2. [3] E. Hivon, F.
K. Hansen, and A. J. Banday, "The healpix primer," arXivpreprint astro-ph/9905275, 1999. [4] K. M. Gorski, E. Hivon, A. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M. Bartelmann, "Healpix: a framework for high-resolution discretization and fast analysis of data distributed on the sphere," The Astrophysical Journal, vol. 622, no. 2, p. 759, 2005. [5] R. Westerteiger, A. Gerndt, and B. Hamann, "Spherical terrain render- ing using the hierarchical healpix grid," VLUDS, vol. 11, pp. 13-23, 2011. [6] W. Schroeder, K. Martin, and B. Lorensen, The Visualization Toolkit. Kitware, 4 ed., 2006. [7] T. van Reimersdahl, T. Kuhlen, A. Gerndt, J. Henrichs, and C. Bischof, "ViSTA: a multimodal, platform-independent VR-toolkit based on WTK, VTK, and MPI," in Proceedings of the 4th International Immersive Projection Technology Workshop (IPT), 2000.
Direct volumetric rendering based on point primitives in OpenGL.
da Rosa, André Luiz Miranda; de Almeida Souza, Ilana; Yuuji Hira, Adilson; Zuffo, Marcelo Knörich
2006-01-01
The aim of this project is to present a software rendering algorithm for acquired volumetric data. The algorithm was implemented in the Java language using the LWJGL graphics library, allowing volume rendering in software and thus avoiding the need for dedicated graphics boards for 3D reconstruction. The algorithm creates a model in OpenGL from point primitives, where each voxel becomes a point whose color values are taken from the corresponding pixel position in the source images.
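The voxel-to-point mapping described above is simple to sketch. The original was Java/LWJGL; this NumPy version (function and threshold names are illustrative, not from the paper) builds the point list and per-point colors that would be handed to OpenGL as point primitives:

```python
import numpy as np

def volume_to_points(volume, threshold=0):
    """Convert a grayscale voxel grid to point primitives.

    Each voxel above `threshold` becomes one point: its (x, y, z)
    index plus a grayscale color taken from the voxel value,
    mirroring the abstract's voxel-to-point mapping.
    """
    mask = volume > threshold
    coords = np.argwhere(mask)   # N x 3 voxel indices
    colors = volume[mask]        # matching intensities, same order
    return coords, colors

# Tiny 2x2x2 volume: only two voxels are non-empty.
vol = np.zeros((2, 2, 2))
vol[0, 0, 0] = 0.5
vol[1, 1, 1] = 0.9
pts, cols = volume_to_points(vol)
# pts -> [[0, 0, 0], [1, 1, 1]]; cols -> [0.5, 0.9]
```

Empty voxels are skipped entirely, which is what keeps a point-based renderer tractable for sparse medical volumes.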
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, T.; Momose, T.; Oku, S.
It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to make even one 3D image, so it has not been practical to generate brain surface images in arbitrary directions on a real-time basis using ordinary workstations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of the resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of the resulting images is comparable to that of VRT and the computation time to that of SRT. In VSRT the process of volume rendering is done only once, along the direction of the normal vector of each surface point, rather than each time a new view point is chosen as in VRT. Subsequent reconstruction of the 3D image uses an algorithm similar to that of SRT. Thus we can obtain brain surface MR images of sufficient quality, viewed from any direction, on a real-time basis using an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 s with VSRT, compared with more than 15 s with conventional VRT. The difference in image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR images is very useful and practical for functional and anatomical correlation studies.
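The speedup comes from moving the expensive step out of the per-view loop: shade each surface point once along its precomputed normal, then any new viewpoint only re-projects already-shaded points. A schematic sketch of that precomputation (a plain Lambertian term, not the authors' actual volume-rendering pass):

```python
import numpy as np

def preshade(normals, light=(0.0, 0.0, 1.0)):
    """View-independent shading done once per surface point, along its
    normal, so later viewpoint changes only re-project shaded points."""
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)

# Three surface normals: facing the light, perpendicular, facing away.
normals = np.array([[0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 0.0, -1.0]])
shade = preshade(normals)
# shade -> [1.0, 0.0, 0.0]; reused unchanged for every new view direction
```

The per-frame cost is then only the SRT-style projection of pre-shaded points, which is why interactive rotation becomes feasible on modest hardware.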
Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.
Holub, Joseph; Winer, Eliot
2017-12-01
Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI), has changed medical imaging from viewing static anatomy to viewing dynamic physiology (4D) over time, particularly to study brain activity. Combined with the rapid adoption of mobile devices for everyday work, this creates a need to visualize fMRI data on tablets and smartphones. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
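The core of any such renderer is front-to-back alpha compositing along each ray, with the sampling interval as the quality/speed knob the abstract mentions. A minimal CPU sketch (orthographic rays along z, with an illustrative opacity mapping; the paper's GPU implementation is of course far more elaborate):

```python
import numpy as np

def raycast_z(volume, alpha_scale=0.5, step=1):
    """Orthographic volume raycasting along the z axis.

    Front-to-back compositing per sample:
        C += (1 - A) * a * c      (accumulated color)
        A += (1 - A) * a          (accumulated opacity)
    `step` is the sampling interval: larger steps trade quality for speed.
    """
    depth = volume.shape[2]
    color = np.zeros(volume.shape[:2])
    alpha = np.zeros(volume.shape[:2])
    for z in range(0, depth, step):
        sample = volume[:, :, z]
        a = np.clip(sample * alpha_scale, 0.0, 1.0)
        color += (1.0 - alpha) * a * sample
        alpha += (1.0 - alpha) * a
    return color, alpha

vol = np.ones((2, 2, 2))
color, alpha = raycast_z(vol)
# with all-ones data and alpha_scale=0.5: color == alpha == 0.75 everywhere
```

Doubling `step` halves the samples per ray, which is exactly the kind of dynamic resolution reduction a mobile renderer uses to hold its frame rate.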
Virtual acoustic environments for comprehensive evaluation of model-based hearing devices.
Grimm, Giso; Luberadzka, Joanna; Hohmann, Volker
2018-06-01
The aim was to create virtual acoustic environments (VAEs) with interactive dynamic rendering for applications in audiology. A toolbox for the creation and rendering of dynamic virtual acoustic environments (TASCAR) that allows direct user interaction was developed for application in hearing aid research and audiology. The software architecture and the simulation methods used to produce VAEs are outlined, and example environments rendered with the proposed software are described and analysed. The result is a readily available tool for the simulation of VAEs.
Standardized volume-rendering of contrast-enhanced renal magnetic resonance angiography.
Smedby, O; Oberg, R; Asberg, B; Stenström, H; Eriksson, P
2005-08-01
To propose a technique for standardizing volume-rendering technique (VRT) protocols and to compare this with maximum intensity projection (MIP) in regard to image quality and diagnostic confidence in stenosis diagnosis with magnetic resonance angiography (MRA). Twenty patients were examined with MRA under suspicion of renal artery stenosis. Using the histogram function in the volume-rendering software, the 95th and 99th percentiles of the 3D data set were identified and used to define the VRT transfer function. Two radiologists assessed the stenosis pathology and image quality from rotational sequences of MIP and VRT images. Good overall agreement (mean kappa=0.72) was found between MIP and VRT diagnoses. The agreement between MIP and VRT was considerably better than that between observers (mean kappa=0.43). One of the observers judged VRT images as having higher image quality than MIP images. Presenting renal MRA images with VRT gave results in good agreement with MIP. With VRT protocols defined from the histogram of the image, the lack of an absolute gray scale in MRI need not be a major problem.
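The standardization step itself is easy to reproduce: read the 95th and 99th percentiles off the volume histogram and anchor an opacity ramp between them. A sketch of the idea (a plain linear ramp; the exact transfer-function shape used in the study is not specified here):

```python
import numpy as np

def percentile_transfer_function(volume, lo_pct=95, hi_pct=99):
    """Build a linear opacity ramp anchored at data-driven percentiles.

    Voxels below the `lo_pct` percentile are fully transparent, voxels
    above the `hi_pct` percentile fully opaque, with a linear ramp in
    between: one way to sidestep MRI's lack of an absolute gray scale.
    """
    lo = np.percentile(volume, lo_pct)
    hi = np.percentile(volume, hi_pct)

    def opacity(value):
        return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

    return opacity, lo, hi

vol = np.arange(101, dtype=float)   # toy "volume": intensities 0..100
op, lo, hi = percentile_transfer_function(vol)
# lo == 95.0, hi == 99.0 for this uniform ramp of values; op(97) == 0.5
```

Because the anchors are computed from each dataset's own histogram, the same protocol transfers between patients without manual window tuning.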
He, Longjun; Ming, Xing; Liu, Qian
2014-04-01
With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot by itself provide a satisfactory quality of experience for radiologists. This paper describes a medical system that retrieves medical images from the picture archiving and communication system (PACS) on a mobile device over a wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of the medical images over the wireless network are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience on WLAN and 3G networks.
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering of outdoor Augmented Reality (AR) has been an attractive topic for the last two decades, judging by the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which do not run in real time. This paper proposes a technique to achieve realistic real-time outdoor rendering that takes into account the interaction between sky colours and objects in AR systems, with respect to shadows, at any specific location, date and time. The approach involves three main phases covering different outdoor AR rendering requirements. First, the sky colour is generated with respect to the position of the sun. The second step is the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows, through their effects on virtual objects in the AR system, is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering. PMID:25268480
Semantic layers for illustrative volume rendering.
Rautek, Peter; Bruckner, Stefan; Gröller, Eduard
2007-01-01
Direct volume rendering techniques map volumetric attributes (e.g., density, gradient magnitude, etc.) to visual styles. Commonly this mapping is specified by a transfer function. The specification of transfer functions is a complex task and requires expert knowledge about the underlying rendering technique. In the case of multiple volumetric attributes and multiple visual styles the specification of the multi-dimensional transfer function becomes more challenging and non-intuitive. We present a novel methodology for the specification of a mapping from several volumetric attributes to multiple illustrative visual styles. We introduce semantic layers that allow a domain expert to specify the mapping in the natural language of the domain. A semantic layer defines the mapping of volumetric attributes to one visual style. Volumetric attributes and visual styles are represented as fuzzy sets. The mapping is specified by rules that are evaluated with fuzzy logic arithmetics. The user specifies the fuzzy sets and the rules without special knowledge about the underlying rendering technique. Semantic layers allow for a linguistic specification of the mapping from attributes to visual styles replacing the traditional transfer function specification.
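The fuzzy-logic machinery behind semantic layers is compact: each attribute gets a membership function, a rule combines memberships with fuzzy AND (min, as in Mamdani-style inference), and the rule's firing strength drives a visual style. A sketch with invented membership functions and an invented rule (the paper's actual sets and rules come from the domain expert):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rule_strength(density, gradient):
    """Example rule: IF density is high AND gradient is large THEN 'contour'.

    Fuzzy AND is modeled as min(); the returned strength would weight
    how strongly the 'contour' style is applied to this voxel.
    """
    density_high = tri(density, 0.5, 1.0, 1.5)
    gradient_large = tri(gradient, 0.3, 0.8, 1.3)
    return min(density_high, gradient_large)

s = rule_strength(0.75, 0.8)
# density_high = 0.5, gradient_large = 1.0 -> rule fires at strength 0.5
```

The appeal is that the rule reads as domain language ("if density is high and gradient is large, draw contours") while the arithmetic underneath stays trivial.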
MR volumetric analysis of the course of nephroblastomatosis under chemotherapy in childhood.
Günther, Patrick; Tröger, Jochen; Graf, Norbert; Waag, Karl Ludwig; Schenk, Jens-Peter
2004-08-01
Nephroblastomatosis is a paediatric renal disease that may undergo malignant transformation. When neoadjuvant chemotherapy is indicated for nephroblastomatosis or bilateral Wilms' tumours, exact volumetric analysis using high-speed data processing and visualization may aid in determining tumour response. Using 3D-volume-rendering software, the 0.5-T MRI data of a 2-year-old girl with bilateral nephroblastomatosis was analysed. Exact volume determination of foci of nephroblastomatosis was performed by automatic and manual segmentation, and the relation to normal renal parenchyma was determined over a 12-month period. At the first visit, 80% (460/547 ml) of the extremely enlarged right kidney was due to nephroblastomatosis. Total tumour volume within the right kidney decreased to 74 ml under chemotherapy. Volume analysis of the two emerging right-sided masses after treatment correctly suggested Wilms' tumour. Three-dimensional rendering of the growing masses aided the surgeon in nephron-sparing surgery during tumour resection.
METRO-APEX Volume 15.1: Industrialist's Manual No. 5, Caesar's Rendering Plant. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Industrialist's Manual No. 5 (Caesar's Rendering Plant) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an…
Raphael, David T; McIntee, Diane; Tsuruda, Jay S; Colletti, Patrick; Tatevossian, Ray
2005-12-01
Magnetic resonance neurography (MRN) is an imaging method by which nerves can be selectively highlighted. Using commercial software, the authors explored a variety of approaches to develop a three-dimensional volume-rendered MRN image of the entire brachial plexus and used it to evaluate the accuracy of infraclavicular block approaches. With institutional review board approval, MRN of the brachial plexus was performed in 10 volunteer subjects. MRN imaging was performed on a GE 1.5-tesla magnetic resonance scanner (General Electric Healthcare Technologies, Waukesha, WI) using a phased array torso coil. Coronal STIR and T1 oblique sagittal sequences of the brachial plexus were obtained. Multiple software programs were explored for enhanced display and manipulation of the composite magnetic resonance images. The authors developed a frontal slab composite approach that allows single-frame reconstruction of a three-dimensional volume-rendered image of the entire brachial plexus. Automatic segmentation was supplemented by manual segmentation in nearly all cases. For each of three infraclavicular approaches (posteriorly directed needle below midclavicle, infracoracoid, or caudomedial to coracoid), the targeting error was measured as the distance from the MRN plexus midpoint to the approach-targeted site. Composite frontal slabs (coronal views), which are single-frame three-dimensional volume renderings from image-enhanced two-dimensional frontal view projections of the underlying coronal slices, were created. The targeting errors (mean +/- SD) for the three approaches (midclavicle, infracoracoid, and caudomedial to coracoid) were 0.43 +/- 0.67, 0.99 +/- 1.22, and 0.65 +/- 1.14 cm, respectively. Image-processed three-dimensional volume-rendered MRN scans, which allow visualization of the entire brachial plexus within a single composite image, have educational value in illustrating the complexity and individual variation of the plexus.
Suggestions for improved guidance during infraclavicular block procedures are presented.
Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi
2015-07-01
Resection of brain tumors is a delicate task in surgery due to its direct influence on the patients' survival rate. Determining the extent of tumor resection requires accurate estimation and comparison of tumor volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI). The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of the tumor volume measurements. The accuracy of the method is validated by comparing the volume estimated with the proposed method against a gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of the tumor tissue and its surroundings. Our results demonstrate that alpha shape theory is superior to other existing standard methods for precise volumetric measurement of tumors. Copyright © 2015 Elsevier Inc. All rights reserved.
Gonçalves, Luís F; Romero, Roberto; Espinoza, Jimmy; Lee, Wesley; Treadwell, Marjorie; Chintala, Kavitha; Brandl, Helmut; Chaiworapongsa, Tinnakorn
2004-04-01
To describe clinical and research applications of 4-dimensional imaging of the fetal heart using color Doppler spatiotemporal image correlation. Forty-four volume data sets were acquired by color Doppler spatiotemporal image correlation. Seven subjects were examined: 4 fetuses without abnormalities, 1 fetus with ventriculomegaly and a hypoplastic cerebellum but normal cardiac anatomy, and 2 fetuses with cardiac anomalies detected by fetal echocardiography (1 case of a ventricular septal defect associated with trisomy 21 and 1 case of a double-inlet right ventricle with a 46,XX karyotype). The median gestational age at the time of examination was 21 3/7 weeks (range, 19 5/7-34 0/7 weeks). Volume data sets were reviewed offline by multiplanar display and volume-rendering methods. Representative images and online video clips illustrating the diagnostic potential of this technology are presented. Color Doppler spatiotemporal image correlation allowed multiplanar visualization of ventricular septal defects, multiplanar display and volume rendering of tricuspid regurgitation, volume rendering of the outflow tracts by color and power Doppler ultrasonography (both in a normal case and in a case of a double-inlet right ventricle with a double-outlet right ventricle), and visualization of venous streams at the level of the foramen ovale. Color Doppler spatiotemporal image correlation has the potential to simplify visualization of the outflow tracts and improve the evaluation of the location and extent of ventricular septal defects. Other applications include 3-dimensional evaluation of regurgitation jets and venous streams at the level of the foramen ovale.
A new framework for interactive quality assessment with application to light field coding
NASA Astrophysics Data System (ADS)
Viola, Irene; Ebrahimi, Touradj
2017-09-01
In recent years, light field has experienced a surge of popularity, mainly due to the recent advances in acquisition and rendering technologies that have made it more accessible to the public. Thanks to image-based rendering techniques, light field contents can be rendered in real time on common 2D screens, allowing virtual navigation through the captured scenes in an interactive fashion. However, this richer representation of the scene poses the problem of reliable quality assessment for light field contents. In particular, while subjective methodologies that enable interaction have already been proposed, no work has been done on assessing how users interact with light field contents. In this paper, we propose a new framework to subjectively assess the quality of light field contents in an interactive manner while simultaneously tracking user behaviour. The framework is successfully used to perform subjective assessment of two coding solutions. Moreover, statistical analysis performed on the results shows an interesting correlation between subjective scores and average interaction time.
3D Volume Rendering and 3D Printing (Additive Manufacturing).
Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T
2018-07-01
Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.
Color-coded depth information in volume-rendered magnetic resonance angiography
NASA Astrophysics Data System (ADS)
Smedby, Orjan; Edsborg, Karin; Henriksson, John
2004-05-01
Magnetic Resonance Angiography (MRA) and Computed Tomography Angiography (CTA) data are usually presented using Maximum Intensity Projection (MIP) or Volume Rendering Technique (VRT), but these often fail to demonstrate a stenosis if the projection angle is not suitably chosen. In order to make vascular stenoses visible in projection images independent of the choice of viewing angle, a method is proposed to supplement these images with colors representing the local caliber of the vessel. After preprocessing the volume image with a median filter, segmentation is performed by thresholding, and a Euclidean distance transform is applied. The distance to the background from each voxel in the vessel is mapped to a color. These colors can either be rendered directly using MIP or be presented together with opacity information based on the original image using VRT. The method was tested in a synthetic dataset containing a cylindrical vessel with stenoses in varying angles. The results suggest that the visibility of stenoses is enhanced by the color information. In clinical feasibility experiments, the technique was applied to clinical MRA data. The results are encouraging and indicate that the technique can be used with clinical images.
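The pipeline in the abstract (median filter, then thresholding, then a Euclidean distance transform whose values are mapped to color) translates almost line for line into `scipy.ndimage`. A minimal sketch, with an illustrative threshold and a toy slab standing in for a vessel:

```python
import numpy as np
from scipy import ndimage

def caliber_map(volume, threshold):
    """Distance-based local-caliber estimate for each vessel voxel.

    A median filter suppresses noise, thresholding segments the vessel,
    and the Euclidean distance transform gives each vessel voxel its
    distance to the background -- the value a transfer function would
    map to hue in the color-coded projection.
    """
    smoothed = ndimage.median_filter(volume, size=3)
    vessel = smoothed > threshold
    return ndimage.distance_transform_edt(vessel)

# Toy "vessel": a solid 5-voxel cube inside an empty 9x9x9 volume.
vol = np.zeros((9, 9, 9))
vol[2:7, 2:7, 2:7] = 1.0
dist = caliber_map(vol, 0.5)
# background voxels get 0; the distance peaks at the cube's center
```

A narrowing (stenosis) shows up as a dip in these distance values, which is what makes the coloring visible regardless of the projection angle.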
Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu
1999-01-01
We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure that is capable of capturing both the temporal and the spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values existing in the spatial domain. However, when treating time merely as another dimension for a time-varying field, difficulties frequently arise due to the discrepancy between the field's spatial and temporal resolutions. In addition, treating spatial and temporal dimensions equally often prevents the possibility of detecting the coherence that is unique in the temporal domain. Using the proposed data structure, our algorithm can meet the following goals. First, both spatial and temporal coherence are identified and exploited for accelerating the rendering process. Second, our algorithm allows the user to supply the desired error tolerances at run time for the purpose of image-quality/rendering-speed trade-off. Third, the amount of data that are required to be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
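The temporal half of the TSP idea can be shown in miniature. In the paper, each spatial octree node carries a binary tree over time; a traversal stops at the coarsest node whose temporal error is within the user-supplied tolerance. A sketch of just that time tree, with the spatial octree omitted and the error measured as maximum deviation from the span mean (the paper's exact error metric may differ):

```python
class TimeNode:
    """Binary time tree over one region's scalar values across timesteps.

    Each node stores the mean over its time span plus the maximum
    deviation from that mean, so queries can stop early wherever the
    temporal error is already within tolerance.
    """
    def __init__(self, values, t0, t1):
        self.t0, self.t1 = t0, t1
        span = values[t0:t1 + 1]
        self.mean = sum(span) / len(span)
        self.error = max(abs(v - self.mean) for v in span)
        self.left = self.right = None
        if t1 > t0:
            mid = (t0 + t1) // 2
            self.left = TimeNode(values, t0, mid)
            self.right = TimeNode(values, mid + 1, t1)

def query(node, tolerance):
    """Return (t0, t1, mean) spans whose temporal error <= tolerance."""
    if node.error <= tolerance or node.left is None:
        return [(node.t0, node.t1, node.mean)]
    return query(node.left, tolerance) + query(node.right, tolerance)

# Field constant over t=0..3, then a jump at t=4..5: the constant
# stretch collapses to one span, the jump forces per-timestep leaves.
root = TimeNode([1.0, 1.0, 1.0, 1.0, 5.0, 5.0], 0, 5)
spans = query(root, 0.1)
```

Temporally coherent stretches are thus represented once and reused across frames, which is where both the speedup and the I/O reduction come from.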
Effect of Reduced Tube Voltage on Diagnostic Accuracy of CT Colonography.
Futamata, Yoshihiro; Koide, Tomoaki; Ihara, Riku
2017-01-01
The normal tube voltage in computed tomography colonography (CTC) is 120 kV. Some reports indicate that the use of a low tube voltage (lower than 120 kV) technique plays a significant role in reduction of radiation dose. However, to determine whether a lower tube voltage can reduce radiation dose without compromising diagnostic accuracy, an evaluation of images that are obtained while maintaining the volume CT dose index (CTDIvol) is required. This study investigated the effect of reduced tube voltage in CTC, without modifying radiation dose (i.e. constant CTDIvol), on image quality. Evaluation of image quality involved the shape of the noise power spectrum, surface profiling with volume rendering (VR), and receiver operating characteristic (ROC) analysis. The shape of the noise power spectrum obtained with a tube voltage of 80 kV and 100 kV was not similar to the one produced with a tube voltage of 120 kV. Moreover, a higher standard deviation was observed on volume-rendered images that were generated using the reduced tube voltages. In addition, ROC analysis revealed a statistically significant drop in diagnostic accuracy with reduced tube voltage, revealing that the modification of tube voltage affects volume-rendered images. The results of this study suggest that reduction of tube voltage in CTC, so as to reduce radiation dose, affects image quality and diagnostic accuracy.
Immersive volume rendering of blood vessels
NASA Astrophysics Data System (ADS)
Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.
2012-03-01
In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time-series data, a wireframe surface to give structure, and the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena, and can be a great help to medical experts for treatment planning.
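The empty-region culling that makes the octree pay off for sparse vessel data is easy to sketch: recursively split the volume into octants and simply never allocate subtrees for octants containing no data. A minimal version (leaf size and class names are illustrative):

```python
import numpy as np

class OctreeNode:
    """Sparse octree over a cubic volume: empty subtrees are discarded."""
    def __init__(self, data):
        self.children = None
        self.data = None
        if not data.any():
            return                      # empty region: store nothing at all
        if data.shape[0] <= 2:          # leaf resolution reached
            self.data = data
            return
        h = data.shape[0] // 2          # split into eight octants
        self.children = [
            OctreeNode(data[x:x + h, y:y + h, z:z + h])
            for x in (0, h) for y in (0, h) for z in (0, h)
        ]

def stored_voxels(node):
    """Count voxels actually kept, to show the sparsity win."""
    if node.data is not None:
        return node.data.size
    if node.children is None:
        return 0
    return sum(stored_voxels(c) for c in node.children)

vol = np.zeros((4, 4, 4))
vol[0, 0, 0] = 1.0                      # one occupied corner octant
tree = OctreeNode(vol)
# only one of eight 2x2x2 octants is stored: 8 voxels instead of 64
```

For a vascular tree, which occupies a small fraction of its bounding box, this kind of culling is what keeps the resampled volume within texture memory.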
NASA Astrophysics Data System (ADS)
Ehricke, Hans-Heino; Daiber, Gerhard; Sonntag, Ralf; Strasser, Wolfgang; Lochner, Mathias; Rudi, Lothar S.; Lorenz, Walter J.
1992-09-01
In stereotactic treatment planning the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for a depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for a multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives to stereotactic treatment planning. For the first time it is possible now to integrate all the necessary information into 3-D scenes, thus enabling an interactive 3-D planning.
A web-based instruction module for interpretation of craniofacial cone beam CT anatomy.
Hassan, B A; Jacobs, R; Scarfe, W C; Al-Rawi, W T
2007-09-01
To develop a web-based module for learner instruction in the interpretation and recognition of osseous anatomy on craniofacial cone-beam CT (CBCT) images. Volumetric datasets from three CBCT systems were acquired (i-CAT, NewTom 3G and AccuiTomo FPD) for various subjects using equipment-specific scanning protocols. The datasets were processed using multiple software to provide two-dimensional (2D) multiplanar reformatted (MPR) images (e.g. sagittal, coronal and axial) and three-dimensional (3D) visual representations (e.g. maximum intensity projection, minimum intensity projection, ray sum, surface and volume rendering). Distinct didactic modules which illustrate the principles of CBCT systems, guided navigation of the volumetric dataset, and anatomic correlation of 3D models and 2D MPR graphics were developed using a hybrid combination of web authoring and image analysis techniques. Interactive web multimedia instruction was facilitated by the use of dynamic highlighting and labelling, and rendered video illustrations, supplemented with didactic textual material. HTML coding and Java scripting were heavily implemented for the blending of the educational modules. An interactive, multimedia educational tool for visualizing the morphology and interrelationships of osseous craniofacial anatomy, as depicted on CBCT MPR and 3D images, was designed and implemented. The present design of a web-based instruction module may assist radiologists and clinicians in learning how to recognize and interpret the craniofacial anatomy of CBCT based images more efficiently.
Terlier, T; Lee, J; Lee, K; Lee, Y
2018-02-06
Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. 
Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.
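The geometric core of such a topography correction can be sketched in a few lines. The function below is a simplified illustration, not the authors' dynamic-model-based correction: it assumes each pixel column erodes at a constant local rate and linearly maps SIMS frame indices onto per-pixel depths bounded by AFM height maps measured before and after sputtering. The name `correct_depth_axis` and its arguments are hypothetical.

```python
import numpy as np

def correct_depth_axis(h_start, h_end, n_frames):
    """Map SIMS frame indices onto per-pixel physical depths.

    h_start, h_end: 2D AFM height maps (same units) measured before and
    after the depth profile. Assumes each pixel column erodes at a
    constant local rate, so frame k at pixel (i, j) sits at
    z = h_start - k / (n_frames - 1) * (h_start - h_end).
    """
    frac = np.linspace(0.0, 1.0, n_frames)  # sputtered fraction per frame
    # z[k, i, j]: corrected height of frame k at pixel (i, j)
    return h_start[None, :, :] - frac[:, None, None] * (h_start - h_end)[None, :, :]
```

The full method additionally models composition-dependent sputter yields, which this constant-rate sketch deliberately omits.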
Lighting design for globally illuminated volume rendering.
Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
With the evolution of graphics hardware, high-quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and it has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account the view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.
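A fixed three-point rig placed relative to the view direction gives a feel for the key/fill/back arrangement such an optimizer searches over. The sketch below is only illustrative, with assumed angles and a hypothetical helper name `three_point_rig`; it is not the paper's automated lighting design.

```python
import numpy as np

def three_point_rig(view_dir, up=(0.0, 0.0, 1.0), key_angle=45.0):
    """Place key, fill and back light directions around a view direction.

    view_dir points from the camera towards the subject. The key light
    sits key_angle degrees to one side of the viewer, the (dimmer) fill
    light mirrors it on the opposite side, and the back light sits
    behind the subject. Returns unit vectors pointing from the subject
    towards each light.
    """
    v = np.asarray(view_dir, float)
    v /= np.linalg.norm(v)
    side = np.cross(v, np.asarray(up, float))
    side /= np.linalg.norm(side)
    a = np.radians(key_angle)
    key = -v * np.cos(a) + side * np.sin(a)   # viewer side, rotated by key_angle
    fill = -v * np.cos(a) - side * np.sin(a)  # mirror image of the key
    back = v                                  # behind the subject
    return key, fill, back
```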
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing images with the parameters already applied, which causes unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
NASA Astrophysics Data System (ADS)
Rodrigues, Pedro L.; Rodrigues, Nuno F.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
An accurate percutaneous puncture is essential for disintegration and removal of renal stones. Although this procedure has proven to be safe, some organs surrounding the renal target might be accidentally perforated. This work describes a new intraoperative framework where tracked surgical tools are superimposed within 4D ultrasound imaging for safety assessment of the percutaneous puncture trajectory (PPT). A PPT is first generated from the skin puncture site towards an anatomical target, using the information retrieved by electromagnetic motion tracking sensors coupled to surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct a 4D ultrasound around the PPT under GPU processing. Volume hole-filling was performed at different processing time intervals by a tri-linear interpolation method. At spaced time intervals, the volume of the anatomical structures was segmented to ascertain whether any vital structure lay in the path of the PPT and might compromise surgical success. To enhance the volume visualization of the reconstructed structures, different render transfer functions were used. Results: real-time US volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views; 8-15 frames/s were achieved when rendering the whole reconstructed volume, and 3 frames/s when segmentation and detection of structures intersecting the PPT were added. The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT to safely and accurately perform the puncture in percutaneous nephrolithotomy.
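The tri-linear interpolation used for hole-filling can be sketched as follows; this is the generic textbook formulation, not the GPU implementation described above.

```python
import numpy as np

def trilinear(vol, x, y, z):
    """Trilinearly interpolate volume vol at fractional coords (x, y, z).

    Each of the eight surrounding voxels contributes with a weight equal
    to the product of its 1D proximities along the three axes.
    """
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx) *
                     (dy if j else 1 - dy) *
                     (dz if k else 1 - dz))
                value += w * vol[x0 + i, y0 + j, z0 + k]
    return value
```

A hole voxel is then filled by evaluating this at its position from the nearest populated neighbours.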
Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device
NASA Astrophysics Data System (ADS)
Färber, Matthias; Heller, Julika; Handels, Heinz
2007-03-01
The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract liquor. The training of this procedure is usually done on the patient guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat and original CT data that contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation the Visible Human male data has been used to generate a virtual training body. Several users with different medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. In particular, the restriction of transversal needle movement, together with the rotation constraints enabled by the 6DOF device, facilitates a realistic puncture simulation.
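As a rough illustration of how label data can drive force feedback, the sketch below sums an axial resistance over voxels ahead of the needle tip using made-up per-tissue stiffness values. It is not the paper's haptic volume rendering approach, and every name and constant here is an assumption for illustration only.

```python
import numpy as np

# Hypothetical per-label resistance values (arbitrary units); a real
# simulator would calibrate these and add the unsegmented-CT contribution.
STIFFNESS = {0: 0.0,   # air
             1: 0.2,   # skin
             2: 0.1,   # fat / muscle
             3: 1.5}   # bone

def needle_force(labels, tip, direction, step=1.0, depth=5):
    """Sum a simple axial resistance over voxels ahead of the needle tip.

    Returns a force vector opposing insertion along the needle axis.
    """
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    f = 0.0
    for s in range(depth):
        p = np.round(np.asarray(tip, float) + s * step * d).astype(int)
        if np.all(p >= 0) and np.all(p < labels.shape):
            f += STIFFNESS.get(int(labels[tuple(p)]), 0.0)
    return -f * d
```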
On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface.
Lopes, Daniel Simões; Parreira, Pedro Duarte de Figueiredo; Paulo, Soraia Figueiredo; Nunes, Vitor; Rego, Paulo Amaral; Neves, Manuel Cassiano; Rodrigues, Pedro Silva; Jorge, Joaquim Armando
2017-08-01
Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real-time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis as users often struggle to obtain the desired orientation, achieved only after several attempts. In this paper, we address which 3D analysis tools are better performed with 3D hand cursors on a touchless interface than with 2D input devices on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists and professional biomedical engineers. Results demonstrate the interface's usability: it improves spatial awareness and provides more fluent interaction with the 3D volume than traditional 2D input devices, requiring fewer attempts to achieve the desired orientation by avoiding the composition of several cumulative rotations that is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection and problems in skeleton tracking that need to be addressed before tests in real medical environments might be performed. Copyright © 2017 Elsevier Inc. All rights reserved.
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
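The per-ray accumulation each processor performs on its local data can be sketched with the standard front-to-back "over" operator; this is the generic formulation, not the paper's exact implementation.

```python
def composite_front_to_back(samples):
    """Accumulate (color, opacity) samples along a ray, ordered front to back.

    Each new sample contributes only through the remaining transparency
    (1 - alpha). Nearly opaque rays terminate early.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination
            break
    return color, alpha
```

In the distributed setting, each processor produces such partial (color, alpha) values for its sub-volume, which are then merged in depth order during the compositing phase.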
Light transport on path-space manifolds
NASA Astrophysics Data System (ADS)
Jakob, Wenzel Alban
The pervasive use of computer-generated graphics in our society has led to strict demands on their visual realism. Generally, users of rendering software want their images to look, in various ways, "real", which has been a key driving force towards methods that are based on the physics of light transport. Until recently, industrial practice has relied on a different set of methods that had comparatively little rigorous grounding in physics---but within the last decade, advances in rendering methods and computing power have come together to create a sudden and dramatic shift, in which physics-based methods that were formerly thought impractical have become the standard tool. As a consequence, considerable attention is now devoted towards making these methods as robust as possible. In this context, robustness refers to an algorithm's ability to process arbitrary input without large increases of the rendering time or degradation of the output image. One particularly challenging aspect of robustness entails simulating the precise interaction of light with all the materials that comprise the input scene. This dissertation focuses on one specific group of materials that has fundamentally been the most important source of difficulties in this process. Specular materials, such as glass windows, mirrors or smooth coatings (e.g. on finished wood), account for a significant percentage of the objects that surround us every day. It is perhaps surprising, then, that it is not well-understood how they can be accommodated within the theoretical framework that underlies some of the most sophisticated rendering methods available today. Many of these methods operate using a theoretical framework known as path space integration. But this framework makes no provisions for specular materials: to date, it is not clear how to write down a path space integral involving something as simple as a piece of glass. 
Although implementations can in practice still render these materials by side-stepping limitations of the theory, they often suffer from unusably slow convergence; improvements to this situation have been hampered by the lack of a thorough theoretical understanding. We address these problems by developing a new theory of path-space light transport which, for the first time, cleanly incorporates specular scattering into the standard framework. Most of the results obtained in the analysis of the ideally smooth case can also be generalized to rendering of glossy materials and volumetric scattering so that this dissertation also provides a powerful new set of tools for dealing with them. The basis of our approach is that each specular material interaction locally collapses the dimension of the space of light paths so that all relevant paths lie on a submanifold of path space. We analyze the high-dimensional differential geometry of this submanifold and use the resulting information to construct an algorithm that is able to "walk" around on it using a simple and efficient equation-solving iteration. This manifold walking algorithm then constitutes the key operation of a new type of Markov Chain Monte Carlo (MCMC) rendering method that computes lighting through very general families of paths that can involve arbitrary combinations of specular, near-specular, glossy, and diffuse surface interactions as well as isotropic or highly anisotropic volume scattering. We demonstrate our implementation on a range of challenging scenes and evaluate it against previous methods.
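The "simple and efficient equation-solving iteration" can be illustrated in miniature with a classical special case: Newton's method applied to Fermat's principle to find a mirror reflection point. This toy 1D analogue is our construction, not the dissertation's general manifold walk; it solves for the stationary point of the path length on a flat mirror.

```python
import math

def reflection_point(ax, ay, bx, by, x0=0.0, iters=20):
    """Find the point (x, 0) on a flat mirror (the line y = 0) where light
    from A = (ax, ay) reflects towards B = (bx, by), by Newton iteration
    on the stationarity of the path length
    L(x) = |A - (x, 0)| + |B - (x, 0)|.
    """
    x = x0
    for _ in range(iters):
        da = math.hypot(x - ax, ay)
        db = math.hypot(x - bx, by)
        g = (x - ax) / da + (x - bx) / db           # dL/dx
        h = ay**2 / da**3 + by**2 / db**3           # d2L/dx2, positive off the mirror
        x -= g / h
    return x
```

The manifold-walking algorithm generalizes this idea to high-dimensional chains of specular constraints, but the structure is the same: a local constraint, its derivatives, and a Newton-style update.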
Clarke, G. M.; Murray, M.; Holloway, C. M. B.; Liu, K.; Zubovits, J. T.; Yaffe, M. J.
2012-01-01
Tumour size, most commonly measured by maximum linear extent, remains a strong predictor of survival in breast cancer. Tumour volume, proportional to the number of tumour cells, may be a more accurate surrogate for size. We describe a novel “3D pathology volumetric technique” for lumpectomies and compare it with 2D measurements. Volume renderings and total tumour volume are computed from digitized whole-mount serial sections using custom software tools. Results are presented for two lumpectomy specimens selected for tumour features which may challenge accurate measurement of tumour burden with conventional, sampling-based pathology: (1) an infiltrative pattern admixed with normal breast elements; (2) a localized invasive mass separated from the in situ component by benign tissue. Spatial relationships between key features (tumour foci, close or involved margins) are clearly visualized in volume renderings. Invasive tumour burden can be underestimated using conventional pathology, compared to the volumetric technique (infiltrative pattern: 30% underestimation; localized mass: 3% underestimation for invasive tumour, 44% for in situ component). Tumour volume approximated from 2D measurements (i.e., maximum linear extent), assuming elliptical geometry, was seen to overestimate volume compared to the 3D volumetric calculation (by a factor of 7x for the infiltrative pattern; 1.5x for the localized invasive mass). PMID:23320179
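The gap between the two estimates comes down to simple geometry. The sketch below contrasts a Cavalieri-style sum of digitized section areas with the ellipsoid approximation from orthogonal diameters; the function names are ours and the formulas are standard stereology, not code from the paper's software tools.

```python
import math

def volume_from_sections(areas_mm2, thickness_mm):
    """Cavalieri estimate: sum of serial-section areas times slice thickness."""
    return sum(areas_mm2) * thickness_mm

def volume_from_extent(d1_mm, d2_mm, d3_mm):
    """Ellipsoid approximation from three orthogonal diameters:
    V = (pi / 6) * d1 * d2 * d3."""
    return (math.pi / 6.0) * d1_mm * d2_mm * d3_mm
```

For an irregular, infiltrative tumour the ellipsoid formula fills in tissue that is not tumour, which is consistent with the overestimation factors reported above.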
Fast algorithm for the rendering of three-dimensional surfaces
NASA Astrophysics Data System (ADS)
Pritt, Mark D.
1994-02-01
It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.
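A minimal form of such surface shading, computing Lambertian intensity from heightfield gradients, can be sketched as follows; this is an illustrative textbook version, not Pritt's algorithm.

```python
import numpy as np

def shade_heightfield(z, light=(0.0, 0.0, 1.0)):
    """Lambertian-shade a heightfield z[i, j] sampled on a unit grid.

    The surface normal of z = f(x, y) is proportional to
    (-dz/dx, -dz/dy, 1); intensity is the clamped dot product with the
    (normalized) light direction.
    """
    L = np.asarray(light, float)
    L /= np.linalg.norm(L)
    gy, gx = np.gradient(z)           # derivatives along axes 0 and 1
    n = np.dstack((-gx, -gy, np.ones_like(z)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return np.clip(n @ L, 0.0, None)  # one intensity value per grid cell
```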
NASA Astrophysics Data System (ADS)
Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich
2015-01-01
Reconstruction of 2D image primitives or 3D volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Arrays" (SoA) access patterns; and a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.
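One of the cubic filters such a framework must support is the cubic B-spline, whose four reconstruction weights for a fractional sample offset are standard. The sketch below is our own illustration of 1D third-order reconstruction, not DeskVOX code.

```python
import numpy as np

def bspline_weights(t):
    """Cubic B-spline basis weights for the four neighbours of a sample
    at fractional offset t in [0, 1). The weights always sum to 1."""
    t2, t3 = t * t, t * t * t
    w0 = (1 - 3 * t + 3 * t2 - t3) / 6.0
    w1 = (4 - 6 * t2 + 3 * t3) / 6.0
    w2 = (1 + 3 * t + 3 * t2 - 3 * t3) / 6.0
    w3 = t3 / 6.0
    return w0, w1, w2, w3

def filter1d(data, x):
    """Reconstruct a 1D signal at fractional coordinate x.

    Requires one sample of padding on each side of the evaluation point.
    """
    i = int(np.floor(x))
    w = bspline_weights(x - i)
    return sum(wk * data[i - 1 + k] for k, wk in enumerate(w))
```

The cubic B-spline smooths rather than interpolates; interpolating cubics (e.g. Catmull-Rom) differ only in their weight polynomials.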
Brennan, Darren D; Zamboni, Giulia; Sosna, Jacob; Callery, Mark P; Vollmer, Charles M V; Raptopoulos, Vassilios D; Kruskal, Jonathan B
2007-05-01
The purposes of this study were to combine a thorough understanding of the technical aspects of the Whipple procedure with advanced rendering techniques by introducing a virtual Whipple procedure and to evaluate the utility of this new rendering technique in prediction of the arterial variants that cross the anticipated surgical resection plane. The virtual Whipple is a novel technique that follows the complex surgical steps in a Whipple procedure. Three-dimensional reconstructed angiographic images are used to identify arterial variants for the surgeon as part of the preoperative radiologic assessment of pancreatic and ampullary tumors.
Culbertson, Heather; Kuchenbecker, Katherine J
2017-01-01
Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
1991-12-01
determined more by economic forces than by flood protection. Thus, if inadequate flood protection rendered development in portions of the American River flood...1978 Patwin. In: Handbook of North American Indians: Volume 8 California, Robert F. Heizer , volume editor. Smithsonian Institution, Washington, D.C. pp...Norman L. & Arlean H. Towne. 1978 Nisenan. In: Handbook of North American Indians: Volume 8 California, Robert F. Heizer , volume editor. Smithsonian
Styszko, Katarzyna; Kupiec, Krzysztof
2016-10-01
In this study the diffusion coefficients of isoproturon, diuron and cybutryn in acrylate and silicone resin-based renders were determined. The diffusion coefficients were determined by measuring concentrations of biocides in the liquid phase after being in contact with renders for specific time intervals. The mathematical solution of the transient diffusion equation for an infinite plate contacted on one side with a limited volume of water was used to calculate the diffusion coefficient. The diffusion coefficients through the acrylate render were 8.10·10⁻⁹ m² s⁻¹ for isoproturon, 1.96·10⁻⁹ m² s⁻¹ for diuron and 1.53·10⁻⁹ m² s⁻¹ for cybutryn. The results for the silicone render were lower by one order of magnitude. Compounds with a high diffusion coefficient in one polymer likewise had high values in the other. Copyright © 2016 Elsevier Ltd. All rights reserved.
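A back-of-envelope use of such coefficients is the characteristic time t ~ l²/D for a biocide to diffuse across a render layer. The 2 mm layer thickness below is an assumed value for illustration only, not a figure from the study.

```python
def diffusion_time_scale(thickness_m, D_m2_per_s):
    """Characteristic time t ~ l**2 / D for diffusion across a layer of
    thickness l with diffusion coefficient D (order-of-magnitude only)."""
    return thickness_m ** 2 / D_m2_per_s

# Isoproturon in the acrylate render (D = 8.10e-9 m**2/s, reported above)
# crossing an assumed 2 mm layer:
t = diffusion_time_scale(2e-3, 8.10e-9)   # on the order of 500 s
```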
Evolution of the Varrier autostereoscopic VR display: 2001-2007
NASA Astrophysics Data System (ADS)
Peterka, Tom; Kooima, Robert L.; Girado, Javier I.; Ge, Jinghua; Sandin, Daniel J.; DeFanti, Thomas A.
2007-02-01
Autostereoscopy (AS) is an increasingly valuable virtual reality (VR) display technology; indeed, the IS&T / SPIE Electronic Imaging Conference has seen rapid growth in the number and scope of AS papers in recent years. The first Varrier paper appeared at SPIE in 2001, and much has changed since then. What began as a single-panel prototype has grown to a full scale VR autostereo display system, with a variety of form factors, features, and options. Varrier is a barrier strip AS display system that qualifies as a true VR display, offering a head-tracked ortho-stereo first person interactive VR experience without the need for glasses or other gear to be worn by the user. Since Varrier's inception, new algorithmic and systemic developments have produced performance and quality improvements. Visual acuity has increased by a factor of 1.4X with new fine-resolution barrier strip linescreens and computational algorithms that support variable sub-pixel resolutions. Performance has improved by a factor of 3X using a new GPU shader-based sub-pixel algorithm that accomplishes in one pass what previously required three passes. The Varrier modulation algorithm that began as a computationally expensive task is now no more costly than conventional stereoscopic rendering. Interactive rendering rates of 60 Hz are now possible in Varrier for complex scene geometry on the order of 100K vertices, and performance is GPU bound, hence it is expected to continue improving with graphics card enhancements. Head tracking is accomplished with a neural network camera-based tracking system developed at EVL for Varrier. Multiple cameras capture subjects at 120 Hz and the neural network recognizes known faces from a database and tracks them in 3D space. New faces are trained and added to the database in a matter of minutes, and accuracy is comparable to commercially available tracking systems. 
Varrier supports a variety of VR applications, including visualization of polygonal, ray traced, and volume rendered data. Both AS movie playback of pre-rendered stereo frames and interactive manipulation of 3D models are supported. Local as well as distributed computation is employed in various applications. Long-distance collaboration has been demonstrated with AS teleconferencing in Varrier. A variety of application domains such as art, medicine, and science have been exhibited, and Varrier exists in a variety of form factors from large tiled installations to smaller desktop forms to fit a variety of space and budget constraints. Newest developments include the use of a dynamic parallax barrier that affords features that were inconceivable with a static barrier.
INCREASING SAVING BEHAVIOR THROUGH AGE-PROGRESSED RENDERINGS OF THE FUTURE SELF.
Hershfield, Hal E; Goldstein, Daniel G; Sharpe, William F; Fox, Jesse; Yeykelis, Leo; Carstensen, Laura L; Bailenson, Jeremy N
2011-11-01
Many people fail to save what they need to for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating the value of future rewards can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards, but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones.
Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I
2015-01-01
This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of current commodity graphics processors to accelerate the generation of high-resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame-rates using an NVIDIA GeForce GTX 970 device.
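The underlying DRR computation is a Beer-Lambert line integral of attenuation along each ray. A minimal orthographic CPU sketch of that math (not the paper's CUDA/OpenGL pipeline) looks like this:

```python
import numpy as np

def drr_orthographic(mu, spacing_mm=1.0, i0=1.0):
    """Generate a DRR from an attenuation volume mu[x, y, z] by integrating
    along parallel rays down the z axis:
    I(x, y) = I0 * exp(-sum_z mu(x, y, z) * dz)   (Beer-Lambert law).
    """
    path = mu.sum(axis=2) * spacing_mm   # line integral per (x, y) ray
    return i0 * np.exp(-path)
```

GPU pipelines like the one above replace the axis-aligned sum with texture-sampled ray marching through an arbitrarily oriented volume, but the per-ray integral is the same.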
Forensic 3D Visualization of CT Data Using Cinematic Volume Rendering: A Preliminary Study.
Ebert, Lars C; Schweitzer, Wolf; Gascho, Dominic; Ruder, Thomas D; Flach, Patricia M; Thali, Michael J; Ampanozi, Garyfalia
2017-02-01
The 3D volume-rendering technique (VRT) is commonly used in forensic radiology. Its main function is to explain medical findings to state attorneys, judges, or police representatives. New visualization algorithms permit the generation of almost photorealistic volume renderings of CT datasets. The objective of this study is to present and compare a variety of radiologic findings to illustrate the differences between and the advantages and limitations of the current VRT and the physically based cinematic rendering technique (CRT). Seventy volunteers were shown VRT and CRT reconstructions of 10 different cases. They were asked to mark the findings on the images and rate them in terms of realism and understandability. A total of 48 of the 70 questionnaires were returned and included in the analysis. On the basis of most of the findings presented, CRT appears to be equal or superior to VRT with respect to the realism and understandability of the visualized findings. Overall, in terms of realism, the difference between the techniques was statistically significant (p < 0.05). Most participants perceived the CRT findings to be more understandable than the VRT findings, but that difference was not statistically significant (p > 0.05). CRT, which is similar to conventional VRT, is not primarily intended for diagnostic radiologic image analysis, and therefore it should be used primarily as a tool to deliver visual information in the form of radiologic image reports. Using CRT for forensic visualization might have advantages over using VRT if conveying a high degree of visual realism is of importance. Most of the shortcomings of CRT have to do with the software being an early prototype.
DOT National Transportation Integrated Search
2017-03-01
This is the second of three reports examining driver medical review practices in the United States and how : they fulfill the basic functions of identifying, assessing, and rendering licensing decisions on medically at-risk : drivers. This volume pre...
Structuring Mentoring Relationships for Competence, Character, and Purpose
ERIC Educational Resources Information Center
Rhodes, Jean E.; Spencer, Renee
2010-01-01
We close this volume with a final commentary from two leaders in the mentoring field. Rhodes and Spencer articulate how the contributions to this volume offer a richer, more complex rendering of relational styles and processes than has been laid out previously in the mentoring literature. They suggest that these efforts should provoke discussion…
ERIC Educational Resources Information Center
Weisburd, Melvin I.
The Field Operations and Enforcement Manual for Air Pollution Control, Volume III, explains in detail the following: inspection procedures for specific sources, kraft pulp mills, animal rendering, steel mill furnaces, coking operations, petroleum refineries, chemical plants, non-ferrous smelting and refining, foundries, cement plants, aluminum…
Hodel, Jérôme; Silvera, Jonathan; Bekaert, Olivier; Rahmouni, Alain; Bastuji-Garin, Sylvie; Vignaud, Alexandre; Petit, Eric; Durning, Bruno; Decq, Philippe
2011-02-01
To assess the three-dimensional turbo spin echo with variable flip-angle distribution magnetic resonance sequence (SPACE: Sampling Perfection with Application optimised Contrasts using different flip-angle Evolution) for the imaging of intracranial cerebrospinal fluid (CSF) spaces. We prospectively investigated 18 healthy volunteers and 25 patients, 20 with communicating hydrocephalus (CH), five with non-communicating hydrocephalus (NCH), using the SPACE sequence at 1.5T. Volume rendering views of both intracranial and ventricular CSF were obtained for all patients and volunteers. The subarachnoid CSF distribution was qualitatively evaluated on volume rendering views using a four-point scale. The CSF volumes within total, ventricular and subarachnoid spaces were calculated, as well as the ratio between ventricular and subarachnoid CSF volumes. Three different patterns of subarachnoid CSF distribution were observed. In healthy volunteers we found narrowed CSF spaces within the occipital area. A diffuse narrowing of the subarachnoid CSF spaces was observed in patients with NCH, whereas patients with CH exhibited narrowed CSF spaces within the high midline convexity. The ratios between ventricular and subarachnoid CSF volumes were significantly different among the volunteers, patients with CH and patients with NCH. The assessment of CSF spaces volume and distribution may help to characterise hydrocephalus.
Radical-Driven Silicon Surface Passivation for Organic-Inorganic Hybrid Photovoltaics
NASA Astrophysics Data System (ADS)
Chandra, Nitish
The advent of metamaterials has increased the complexity of possible light-matter interactions, creating gaps in knowledge, violating various commonly used approximations, and rendering some common mathematical frameworks incomplete. Our forward scattering experiments on metallic shells and cavities have created a need for a rigorous geometry-based analysis of scattering problems and more rigorous current distribution descriptions in the volume of the scattering object. In order to build an accurate understanding of these interactions, we have revisited the fundamentals of Maxwell's equations, electromagnetic potentials and boundary conditions to build a bottom-up geometry-based analysis of scattering. Individual structures or meta-atoms can be designed to localize the incident electromagnetic radiation in order to create a change in local constitutive parameters and possible nonlinear responses. Hence, in next generation engineered materials, an accurate determination of current distribution on the surface and in the structure's volume plays an important role in describing and designing desired properties. Multipole expansions of the exact current distribution determined using principles of differential geometry provide an elegant way to study these local interactions of meta-atoms. The dynamics of the interactions can be studied using the behavior of the polarization and magnetization densities generated by localized current densities interacting with the electromagnetic potentials associated with the incident waves. The multipole method combined with propagation of electromagnetic potentials can be used to predict a large variety of linear and nonlinear physical phenomena. This has been demonstrated in experiments that enable the analog detection of sources placed at subwavelength separation by using time reversal of observed signals. 
Time reversal is accomplished by reversing the direction of the magnetic dipole in bianisotropic metasurfaces while simultaneously providing a method to reduce the losses often observed when light interacts with meta-structures.
Adragna, Norma C; Ravilla, Nagendra B; Lauf, Peter K; Begum, Gulnaz; Khanna, Arjun R; Sun, Dandan; Kahle, Kristopher T
2015-01-01
The defense of cell volume against excessive shrinkage or swelling is a requirement for cell function and organismal survival. Cell swelling triggers a coordinated homeostatic response termed regulatory volume decrease (RVD), resulting in K+ and Cl− efflux via activation of K+ channels, volume-regulated anion channels (VRACs), and the K+-Cl− cotransporters, including KCC3. Here, we show genetic alanine (Ala) substitution at threonines (Thr) 991 and 1048 in the KCC3a isoform carboxyl-terminus, preventing inhibitory phosphorylation at these sites, not only significantly up-regulates KCC3a activity up to 25-fold in normally inhibitory isotonic conditions, but is also accompanied by reversal of activity of the related bumetanide-sensitive Na+-K+-2Cl− cotransporter isoform 1 (NKCC1). This results in a rapid (<10 min) and significant (>90%) reduction in intracellular K+ content (Ki) via both Cl-dependent (KCC3a + NKCC1) and Cl-independent [DCPIB (VRAC inhibitor)-sensitive] pathways, which collectively renders cells less prone to acute swelling in hypotonic osmotic stress. Together, these data demonstrate the phosphorylation state of Thr991/Thr1048 in KCC3a encodes a potent switch of transporter activity, Ki homeostasis, and cell volume regulation, and reveal novel observations into the functional interaction among ion transport molecules involved in RVD. PMID:26217182
Automatic partitioning of head CTA for enabling segmentation
NASA Astrophysics Data System (ADS)
Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin
2004-05-01
Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, providing complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm automatically identifies these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed from bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (0.01 seconds per slice), which makes it attractive for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions; complex methods are restricted to the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
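The abstract does not spell out how the profiles drive the rule-based partitioning; the following sketch, with hypothetical function names, shows how one of the cited profiles, per-slice image entropy, might be computed and then used to place cut locations, under the simplifying assumption (ours, not the authors') that partition boundaries coincide with the largest jumps in the profile:

```python
import numpy as np

def slice_entropy_profile(volume, bins=64):
    """Shannon entropy of the intensity histogram of each axial slice."""
    profile = []
    for z in range(volume.shape[0]):
        hist, _ = np.histogram(volume[z], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins before taking logs
        profile.append(-np.sum(p * np.log2(p)))
    return np.array(profile)

def partition_boundaries(profile, n_parts=3):
    """Naive placeholder rule: cut at the largest jumps in the profile."""
    jumps = np.abs(np.diff(profile))
    cuts = np.sort(np.argsort(jumps)[-(n_parts - 1):] + 1)
    return cuts.tolist()
```

The real method combines several profiles and clinical rules; this only illustrates the profile-then-threshold structure.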
Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio
2009-11-01
We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon–bone–muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18–30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data. PMID:19678857
CT Demonstration of Caput Medusae
ERIC Educational Resources Information Center
Weber, Edward C.; Vilensky, Joel A.
2009-01-01
Maximum intensity and volume rendered CT displays of caput medusae are provided to demonstrate both the anatomy and physiology of this portosystemic shunt associated with portal hypertension. (Contains 2 figures.)
A unified framework for building high performance DVEs
NASA Astrophysics Data System (ADS)
Lei, Kaibin; Ma, Zhixia; Xiong, Hua
2011-10-01
A unified framework for integrating PC-cluster-based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. Although various scene graphs have been proposed for DVEs, it is difficult to make different scene graphs collaborate. This paper proposes a technique that gives non-distributed scene graphs the capability of object and event distribution. As graphics data grow, DVEs require more powerful rendering ability, but general scene graphs are inefficient at parallel rendering. The paper therefore also proposes a technique to connect a DVE to a PC-cluster-based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.
Goto, Masami; Kunimatsu, Akira; Shojima, Masaaki; Abe, Osamu; Aoki, Shigeki; Hayashi, Naoto; Mori, Harushi; Ino, Kenji; Yano, Keiichi; Saito, Nobuhito; Ohtomo, Kuni
2013-03-25
We present a case in which the origin of a branching vessel at the aneurysm neck was observed in the wrong place on volume renderings (VR) of 3D time-of-flight MRA (3D-TOF-MRA) acquired with a 3-Tesla MR system. In 3D-TOF-MRA it is often difficult to observe the origin of a branching vessel, but it is unusual for it to be observed in the wrong place. In the planning of interventional treatment and surgical procedures, false recognition, as in the unique case in the present report, is a serious problem. Decisions based only on VR with 3D-TOF-MRA can lead to suboptimal selection of clinical treatment.
Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction
NASA Astrophysics Data System (ADS)
Puzyrkov, D.; Polyakov, S.; Podryga, V.
2016-02-01
The development of methods, algorithms and applications for visualization of molecular dynamics simulation outputs is discussed. The visual analysis of the results of such calculations is a complex and pressing problem, especially for large-scale simulations. To solve this challenging task it is necessary to decide: 1) which data parameters to render, 2) which type of visualization to choose, and 3) which development tools to use. In the present work an attempt to answer these questions was made. For visualization we chose to draw particles at their 3D coordinates along with their velocity vectors, trajectories, and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language with additional libraries. We also developed parallel software that processes large volumes of data in the 3D regions of the examined system. This software produces results in parallel with the calculations and, at the end, collects the discrete frames into a video file. The software package "Enthought Mayavi2" was used as the visualization tool. This application gave us the opportunity to study the interaction of a gas with a metal surface and to closely observe the adsorption effect.
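The volume-density field rendered as isosurfaces or fog is, in essence, particle positions binned onto a 3D grid. A minimal sketch of that step (the function name and grid choice are illustrative, not the authors' code):

```python
import numpy as np

def volume_density(positions, box, grid=(16, 16, 16)):
    """Bin particle coordinates into a 3D number-density field (particles per cell volume)."""
    counts, _ = np.histogramdd(positions, bins=grid,
                               range=[(0, box[0]), (0, box[1]), (0, box[2])])
    cell_vol = np.prod(np.asarray(box, dtype=float) / np.asarray(grid))
    return counts / cell_vol
```

The resulting scalar field is what a tool such as Mayavi2 would contour into isosurfaces.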
PACS-based interface for 3D anatomical structure visualization and surgical planning
NASA Astrophysics Data System (ADS)
Koehl, Christophe; Soler, Luc; Marescaux, Jacques
2002-05-01
The interpretation of radiological image is routine but it remains a rather difficult task for physicians. It requires complex mental processes, that permit translation from 2D slices into 3D localization and volume determination of visible diseases. An easier and more extensive visualization and exploitation of medical images can be reached through the use of computer-based systems that provide real help from patient admission to post-operative followup. In this way, we have developed a 3D visualization interface linked to a PACS database that allows manipulation and interaction on virtual organs delineated from CT-scan or MRI. This software provides the 3D real-time surface rendering of anatomical structures, an accurate evaluation of volumes and distances and the improvement of radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning allowing the positioning of an interactive laparoscopic instrument and the organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step of the future development of augmented reality and surgical simulation systems.
Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A
1999-08-01
Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.
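Of the four visualization algorithms compared above, maximum intensity projection and opaque-voxel ray casting are simple enough to sketch in a few lines. These axis-aligned numpy versions are illustrative only (the study used full 3D renderers, not this code):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: keep the brightest voxel along each ray."""
    return volume.max(axis=axis)

def first_hit(volume, threshold, axis=0):
    """Opaque-voxel ray casting: depth index of the first voxel at or above
    threshold along each ray, or -1 where the ray hits nothing."""
    mask = volume >= threshold
    hit = mask.argmax(axis=axis)          # index of first True along the ray
    hit[~mask.any(axis=axis)] = -1        # rays with no hit
    return hit
```

Isosurface rendering and transparent-voxel ray casting add interpolation and compositing on top of these two primitives.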
Sculpting Mountains: Interactive Terrain Modeling Based on Subsurface Geology.
Cordonnier, Guillaume; Cani, Marie-Paule; Benes, Bedrich; Braun, Jean; Galin, Eric
2018-05-01
Most mountain ranges are formed by the compression and folding of colliding tectonic plates. Subduction of one plate causes large-scale asymmetry while their layered composition (or stratigraphy) explains the multi-scale folded strata observed on real terrains. We introduce a novel interactive modeling technique to generate visually plausible, large scale terrains that capture these phenomena. Our method draws on both geological knowledge for consistency and on sculpting systems for user interaction. The user is provided hands-on control on the shape and motion of tectonic plates, represented using a new geologically-inspired model for the Earth crust. The model captures their volume preserving and complex folding behaviors under collision, causing mountains to grow. It generates a volumetric uplift map representing the growth rate of subsurface layers. Erosion and uplift movement are jointly simulated to generate the terrain. The stratigraphy allows us to render folded strata on eroded cliffs. We validated the usability of our sculpting interface through a user study, and compare the visual consistency of the earth crust model with geological simulation results and real terrains.
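The joint simulation of uplift and erosion can be caricatured in one dimension: height grows by a tectonic uplift rate and is smoothed by erosion. This toy explicit step assumes linear diffusion erosion, which is far simpler than the paper's geologically-informed model; all names are illustrative:

```python
import numpy as np

def terrain_step(h, uplift, k=0.1, dt=1.0):
    """One explicit time step: h grows by the uplift rate and diffuses (erosion)."""
    lap = np.zeros_like(h)
    lap[1:-1] = h[:-2] - 2 * h[1:-1] + h[2:]   # discrete Laplacian, fixed boundaries
    return h + dt * (uplift + k * lap)
```

Iterating this step makes uplifted regions rise while sharp peaks are worn down, the qualitative interplay the paper simulates in full 3D with stratigraphy.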
Archeological Investigations in Cochiti Reservoir, New Mexico. Volume 3. 1976-1977 Field Seasons.
1979-01-01
INCREASING SAVING BEHAVIOR THROUGH AGE-PROGRESSED RENDERINGS OF THE FUTURE SELF
HERSHFIELD, HAL E.; GOLDSTEIN, DANIEL G.; SHARPE, WILLIAM F.; FOX, JESSE; YEYKELIS, LEO; CARSTENSEN, LAURA L.; BAILENSON, JEREMY N.
2014-01-01
Many people fail to save what they need to for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating the value of future rewards can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards, but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones. PMID:24634544
[Virtual endoscopy with a volumetric reconstruction technic: the technical aspects].
Pavone, P; Laghi, A; Panebianco, V; Catalano, C; Giura, R; Passariello, R
1998-06-01
We analyze the peculiar technical features of virtual endoscopy obtained with volume rendering. Our preliminary experience is based on virtual endoscopy images from volumetric data acquired with spiral CT (Siemens, Somatom Plus 4) using acquisition protocols standardized for different anatomic areas. Images are reformatted at the CT console, to obtain 1 mm thick contiguous slices, and transferred in DICOM format to an O2 workstation (Silicon Graphics, Mountain View, CA, USA) with a processor speed of 180 MHz, 256 Mbyte RAM and a 4.1 Gbyte hard disk. The software is Vitrea 1.0 (Vital Images, Fairfield, Iowa), running on a Unix platform. Image output is sent over the Ethernet network to a Macintosh computer and a thermal printer (Kodak 8600 XLS). Diagnostic-quality images were obtained in all cases. Fly-through in the airways allowed correct evaluation of the main bronchi and of the origin of the segmentary bronchi. In the vascular district, both carotid strictures and abdominal aortic aneurysms were depicted with the same accuracy as conventional reconstruction techniques. In the colon studies, polypoid lesions were correctly depicted in all cases, with good correlation with endoscopic and double-contrast barium enema findings. In a case of lipoma of the ascending colon, virtual endoscopy allowed study of the colon both cranially and caudally to the lesion, and simultaneous evaluation of axial CT images made it possible to characterize the lesion correctly on the basis of its density values. The peculiar feature of volume rendering is the use of all the information inside the imaging volume to reconstruct three-dimensional images; no threshold values are used and no data are lost, as opposed to conventional image reconstruction techniques. The different anatomic structures are visualized by modifying their reciprocal opacities, showing structures of no interest as translucent. The modulation of the different opacities is obtained by modifying the shape of the opacity curve, either using pre-set curves or in a completely independent way. Other technical features of volume rendering are perspective evaluation of the objects, color and lighting. In conclusion, volume rendering is a promising technique for elaborating three-dimensional images, offering very realistic endoscopic views. At present, the main limitation is the need for powerful and high-cost workstations.
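The opacity-curve mechanism described above, where every sample contributes according to its opacity rather than being cut by a threshold, amounts to front-to-back alpha compositing along each viewing ray. A minimal sketch, assuming a user-supplied opacity curve and using the scalar value itself as a grey-level colour (both simplifications of a real renderer):

```python
def composite_ray(samples, opacity_curve):
    """Front-to-back alpha compositing of scalar samples along one ray.

    opacity_curve maps a sample value to an opacity in [0, 1]; samples
    of no interest get low opacity and so appear translucent."""
    colour, transmittance = 0.0, 1.0
    for s in samples:
        a = opacity_curve(s)
        colour += transmittance * a * s       # contribution attenuated by material in front
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:              # early ray termination behind opaque material
            break
    return colour
```

Reshaping the opacity curve, as the abstract describes, changes which structures dominate the composite without discarding any data.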
Distributed rendering for multiview parallax displays
NASA Astrophysics Data System (ADS)
Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.
2006-02-01
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software based ray-tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
A Review on Real-Time 3D Ultrasound Imaging Technology
Huang, Qinghua; Zeng, Zhaozheng
2017-01-01
Real-time three-dimensional (3D) ultrasound (US) has attracted much attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information of the scanned area, and hence is valuable in intraoperative ultrasound examinations. Many publications have addressed real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review of how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067
3D chromosome rendering from Hi-C data using virtual reality
NASA Astrophysics Data System (ADS)
Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing
2015-01-01
Most genome browsers display DNA linearly, using single-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
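Generating 3D coordinates from Hi-C data typically starts by converting contact frequencies into target spatial distances before an embedding step. A common modeling assumption (ours here, not necessarily the authors') is an inverse power law, d ~ f^(-alpha):

```python
import numpy as np

def frequency_to_distance(freq, alpha=1.0):
    """Convert a Hi-C contact-frequency matrix into target distances d ~ f^(-alpha).

    Pairs with zero observed contacts get an infinite (unconstrained) distance;
    the diagonal (a locus with itself) is zero. alpha is an illustrative default."""
    f = np.asarray(freq, dtype=float)
    with np.errstate(divide="ignore"):
        d = np.where(f > 0, f ** -alpha, np.inf)
    np.fill_diagonal(d, 0.0)
    return d
```

The resulting distance matrix would then be fed to a 3D embedding (e.g. multidimensional scaling) to obtain the chromosome model that is rendered.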
Topological Galleries: A High Level User Interface for Topology Controlled Volume Rendering
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacCarthy, Brian; Carr, Hamish; Weber, Gunther H.
2011-06-30
Existing topological interfaces to volume rendering are limited by their reliance on sophisticated knowledge of topology by the user. We extend previous work by describing topological galleries, an interface for novice users that is based on the design galleries approach. We report three contributions: an interface based on hierarchical thumbnail galleries to display the containment relationships between topologically identifiable features, the use of the pruning hierarchy instead of branch decomposition for contour tree simplification, and drag-and-drop transfer function assignment for individual components. Initial results suggest that this approach suffers from limitations due to rapid drop-off of feature size in the pruning hierarchy. We explore these limitations by providing statistics of feature size as a function of depth in the pruning hierarchy of the contour tree.
NASA Astrophysics Data System (ADS)
Rettmann, M. E.; Holmes, D. R., III; Gunawan, M. S.; Ge, X.; Karwoski, R. A.; Breen, J. F.; Packer, D. L.; Robb, R. A.
2012-03-01
Geometric analysis of the left atrium and pulmonary veins is important for studying reverse structural remodeling following cardiac ablation therapy. It has been shown that the left atrium decreases in volume and the pulmonary vein ostia decrease in diameter following ablation therapy. Most analysis techniques, however, require laborious manual tracing of image cross-sections. Pulmonary vein diameters are typically measured at the junction between the left atrium and pulmonary veins, called the pulmonary vein ostia, with manually drawn lines on volume renderings or on image cross-sections. In this work, we describe a technique for making semi-automatic measurements of the left atrium and pulmonary vein ostial diameters from high resolution CT scans and multi-phase datasets. The left atrium and pulmonary veins are segmented from a CT volume using a 3D volume approach and cut planes are interactively positioned to separate the pulmonary veins from the body of the left atrium. The cut plane is also used to compute the pulmonary vein ostial diameter. Validation experiments are presented which demonstrate the ability to repeatedly measure left atrial volume and pulmonary vein diameters from high resolution CT scans, as well as the feasibility of this approach for analyzing dynamic, multi-phase datasets. In the high resolution CT scans the left atrial volume measurements show high repeatability with approximately 4% intra-rater repeatability and 8% inter-rater repeatability. Intra- and inter-rater repeatability for pulmonary vein diameter measurements range from approximately 2 to 4 mm. For the multi-phase CT datasets, differences in left atrial volumes between a standard slice-by-slice approach and the proposed 3D volume approach are small, with percent differences on the order of 3% to 6%.
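Once the left atrium is segmented and the cut planes are placed, the two reported measurements reduce to simple geometry: chamber volume is voxel count times voxel volume, and an ostial diameter can be summarized as the diameter of a circle with the cross-section's area. A sketch with hypothetical function names (the paper's exact diameter definition may differ):

```python
import numpy as np

def segmented_volume(mask, spacing):
    """Volume of a binary segmentation: voxel count times per-voxel volume."""
    return mask.sum() * float(np.prod(spacing))

def equivalent_diameter(cross_section_mask, pixel_area):
    """Ostial diameter as the diameter of a circle with the cross-section's area."""
    area = cross_section_mask.sum() * pixel_area
    return 2.0 * np.sqrt(area / np.pi)
```

Repeatability studies like the one above then compare these scalars across raters and phases.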
NASA Astrophysics Data System (ADS)
Mahmoud, Faaiza; Ton, Anthony; Crafoord, Joakim; Kramer, Elissa L.; Maguire, Gerald Q., Jr.; Noz, Marilyn E.; Zeleznik, Michael P.
2000-06-01
The purpose of this work was to evaluate three volumetric registration methods in terms of technique, user-friendliness and time requirements. CT and SPECT data from 11 patients were interactively registered using: a 3D method involving only affine transformation; a mixed 3D - 2D non-affine (warping) method; and a 3D non-affine (warping) method. In the first method representative isosurfaces are generated from the anatomical images. Registration proceeds through translation, rotation, and scaling in all three space variables. Resulting isosurfaces are fused and quantitative measurements are possible. In the second method, the 3D volumes are rendered co-planar by performing an oblique projection. Corresponding landmark pairs are chosen on matching axial slice sets. A polynomial warp is then applied. This method has undergone extensive validation and was used to evaluate the results. The third method employs visualization tools. The data model allows images to be localized within two separate volumes. Landmarks are chosen on separate slices. Polynomial warping coefficients are generated and data points from one volume are moved to the corresponding new positions. The two landmark methods were the least time consuming (10 to 30 minutes from start to finish), but did demand a good knowledge of anatomy. The affine method was tedious and required a fair understanding of 3D geometry.
Denoising and 4D visualization of OCT images
Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.
2009-01-01
We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, Michael B.; Van Horn, J. David; Wu, Fei
The synthesis of microporous polymers generally requires postpolymerization modification via hyper-cross-linking to trap the polymeric network in a state with high void volume. An alternative approach utilizes rigid, sterically demanding monomers to inhibit efficient packing, thus leading to a high degree of free volume between polymer side groups and main chains. Herein we combine polymers of intrinsic microporosity with polymerization-induced microphase separation (PIMS), a versatile methodology for the synthesis of nanostructured materials that can be rendered mesoporous. Copolymerization of various styrenic monomers with divinylbenzene in the presence of a poly(lactide) terminated with a chain-transfer agent (PLA-CTA) results in kinetic trapping of a microphase-separated state. Subsequent etching of PLA provides a bicontinuous mesoporous network. Using equilibrium and kinetic nitrogen sorption experiments as well as positron annihilation lifetime spectroscopy (PALS), we demonstrate that variations in the steric characteristics of the styrenic monomer impart the network with microporosity, resulting in hierarchically (meso and micro) porous materials. Additionally, structure–property relationships of the styrenic monomer with total surface area and pore volume indicate that the glass transition temperature (Tg) of the corresponding styrenic homopolymers provides a reasonable measure of the steric interactions and resultant microporosity in these systems. Finally, PALS provides insight into micro- and mesoscopic void volume differences between porous monoliths containing either tert-butyl or TMS-modified styrenic monomers compared to the parent, unmodified styrene.
Radiological tele-immersion for next generation networks.
Ai, Z; Dech, F; Rasmussen, M; Silverstein, J C
2000-01-01
Since the acquisition of high-resolution three-dimensional patient images has become widespread, medical volumetric datasets (CT or MR) larger than 100 MB and encompassing more than 250 slices are common. It is important to make this patient-specific data quickly available and usable to many specialists at different geographical sites. Web-based systems have been developed to provide volume or surface rendering of medical data over networks with low fidelity, but these cannot adequately handle stereoscopic visualization or huge datasets. State-of-the-art virtual reality techniques and high speed networks have made it possible to create an environment in which geographically distributed clinicians can immersively share these massive datasets in real time. An object-oriented method for instantaneously importing medical volumetric data into Tele-Immersive environments has been developed at the Virtual Reality in Medicine Laboratory (VRMedLab) at the University of Illinois at Chicago (UIC). This networked-VR setup is based on LIMBO, an application framework or template that provides the basic capabilities of Tele-Immersion. We have developed a modular general purpose Tele-Immersion program that automatically combines 3D medical data with the methods for handling the data. For this purpose a DICOM loader for IRIS Performer has been developed. The loader was designed for SGI machines as a shared object, which is executed at LIMBO's runtime. The loader loads not only the selected DICOM dataset, but also methods for rendering, handling, and interacting with the data, bringing networked, real-time, stereoscopic interaction with radiological data to reality. Collaborative, interactive methods currently implemented in the loader include cutting planes and windowing. The Tele-Immersive environment has been tested on the UIC campus over an ATM network.
We tested the environment with 3 nodes: one ImmersaDesk at the VRMedLab, one CAVE at the Electronic Visualization Laboratory (EVL) on east campus, and a CT scan machine in UIC Hospital. CT data was pulled directly from the scan machine to the Tele-Immersion server in our laboratory, and then the data was synchronously distributed by our Onyx2 Rack server to all the VR setups. Rather than confining medical volume visualization to a single VR device, the Tele-Immersive environment combines teleconferencing, tele-presence, and virtual reality to enable geographically distributed clinicians to intuitively interact with the same medical volumetric models, and to point, gesture, converse, and see each other. This environment will bring together clinicians at different geographic locations to participate in Tele-Immersive consultation and collaboration.
Physically-Based Rendering of Particle-Based Fluids with Light Transport Effects
NASA Astrophysics Data System (ADS)
Beddiaf, Ali; Babahenini, Mohamed Chaouki
2018-03-01
Recent interactive rendering approaches aim to produce images efficiently. However, time constraints deeply affect their output accuracy and realism (many light phenomena are poorly supported or not supported at all). To remedy this issue, in this paper, we propose a physically-based fluid rendering approach. First, while state-of-the-art methods focus on isosurface rendering with only two refractions, our proposal (1) considers the fluid as a heterogeneous participating medium with refractive boundaries, and (2) supports both multiple refractions and scattering. Second, the proposed solution is fully particle-based in the sense that no transformation of particles into a grid is required. This feature enables it to handle many particle types (water, bubble, foam, and sand). On top of that, a medium with different fluids (color, phase function, etc.) can also be rendered.
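The "heterogeneous participating medium" view can be illustrated by ray-marching transmittance directly over particles, with no intermediate grid, in the spirit of (but not reproducing) the paper's method. The kernel shape, extinction coefficient, and step size below are assumptions for illustration only:

```python
import math

def density_at(p, particles, h=1.0):
    """SPH-style density: linear-falloff kernel of radius h around each particle."""
    d = 0.0
    for (px, py, pz, mass) in particles:
        r = math.dist(p, (px, py, pz))
        if r < h:
            d += mass * (1.0 - r / h)
    return d

def transmittance(origin, direction, particles, sigma_t=0.5, step=0.1, n=100):
    """Ray-march T = exp(-sigma_t * integral of density) along origin + t*direction."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    tau = 0.0  # accumulated optical depth
    for i in range(n):
        p = (ox + i * step * dx, oy + i * step * dy, oz + i * step * dz)
        tau += sigma_t * density_at(p, particles) * step
    return math.exp(-tau)

# one particle of unit mass sitting on the ray attenuates it
T_empty = transmittance((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), [])
T_fluid = transmittance((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), [(1.0, 0.0, 0.0, 1.0)])
```

A full renderer would additionally sample scattering events and refract at fluid boundaries; this sketch shows only the grid-free attenuation idea.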
NASA Astrophysics Data System (ADS)
Ballora, Mark; Hall, David L.
2010-04-01
Detection of intrusions is a continuing problem in network security. Due to the large volumes of data recorded in Web server logs, analysis is typically forensic, taking place only after a problem has occurred. This paper describes a novel method of representing Web log information through multi-channel sound, while simultaneously visualizing network activity using a 3-D immersive environment. We are exploring the detection of intrusion signatures and patterns, utilizing human aural and visual pattern recognition ability to detect intrusions as they occur. IP addresses and return codes are mapped to an informative and unobtrusive listening environment to act as a situational sound track of Web traffic. Web log data is parsed and formatted using Python, then read as a data array by the synthesis language SuperCollider [1], which renders it as a sonification. This can be done either for the study of pre-existing data sets or in monitoring Web traffic in real time. Components rendered aurally include IP address, geographical information, and server Return Codes. Users can interact with the data, speeding or slowing the speed of representation (for pre-existing data sets) or "mixing" sound components to optimize intelligibility for tracking suspicious activity.
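The mapping described above, in which web-log fields drive synthesis parameters, can be sketched in a few lines. Everything below is a hypothetical illustration: the field choices, note range, and amplitude values are assumptions, not the authors' design (their system parses logs with Python and renders sound in SuperCollider).

```python
import re

# Common Log Format: ip ident user [timestamp] "request" status size
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[.*?\] "(?P<req>.*?)" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def log_line_to_params(line):
    """Map one web-log entry to illustrative sound parameters."""
    m = LOG_RE.match(line)
    if m is None:
        return None
    first_octet = int(m.group("ip").split(".")[0])
    status = int(m.group("status"))
    return {
        # first IP octet chooses a base pitch (MIDI notes 36-99)
        "midi_note": 36 + first_octet % 64,
        # error-class return codes play louder so they stand out
        "amplitude": 0.8 if status >= 400 else 0.3,
    }

params = log_line_to_params(
    '203.0.113.7 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 404 2326'
)
```

In the actual system these parameters would be sent on to the synthesis engine rather than returned as a dictionary.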
Gao, Peng; Liu, Peng; Su, Hongsen; Qiao, Liang
2015-04-01
Integrating a visualization toolkit with the interaction, bidirectional communication, and graphics rendering capabilities provided by HTML5, we explored and experimented with the feasibility of remote medical image reconstruction and interaction purely in the Web browser. We proposed a server-centric method that does not require downloading large medical datasets to local machines, avoiding concerns about network transmission pressure and the three-dimensional (3D) rendering capability of client hardware. The method integrates remote medical image reconstruction and interaction seamlessly into the Web, making it applicable to lower-end computers and mobile devices. Finally, we tested this method over the Internet and achieved real-time performance. This Web-based 3D reconstruction and interaction method, which works across Internet terminals and performance-limited devices, may be useful for remote medical assistance.
Augmented reality in laparoscopic surgical oncology.
Nicolau, Stéphane; Soler, Luc; Mutter, Didier; Marescaux, Jacques
2011-09-01
Minimally invasive surgery represents one of the main evolutions of surgical techniques aimed at providing a greater benefit to the patient. However, minimally invasive surgery increases the operative difficulty since the depth perception is usually dramatically reduced, the field of view is limited and the sense of touch is transmitted by an instrument. However, these drawbacks can currently be reduced by computer technology guiding the surgical gesture. Indeed, from a patient's medical image (US, CT or MRI), Augmented Reality (AR) can increase the surgeon's intra-operative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization on the real patient. 3D visualization can be performed directly from the medical image without the need for a pre-processing step thanks to volume rendering. But better results are obtained with surface rendering after organ and pathology delineations and 3D modelling. Registration can be performed interactively or automatically. Several interactive systems have been developed and applied to humans, demonstrating the benefit of AR in surgical oncology. They also show that interactivity is currently limited by soft-organ movement and by interaction between surgical instruments and organs. Although current automatic AR systems demonstrate the feasibility of such an approach, they still rely on specific and expensive equipment that is not available in clinical routine. Moreover, they are not robust enough, due to the high complexity of developing a real-time registration that takes organ deformation and human movement into account. However, the latest results of automatic AR systems are extremely encouraging and show that it will become a standard requirement for future computer-assisted surgical oncology. In this article, we will explain the concept of AR and its principles.
Then, we will review the existing interactive and automatic AR systems in digestive surgical oncology, highlighting their benefits and limitations. Finally, we will discuss the future evolutions and the issues that still have to be tackled so that this technology can be seamlessly integrated in the operating room. Copyright © 2011 Elsevier Ltd. All rights reserved.
Virtual Sonography Through the Internet: Volume Compression Issues
Vilarchao-Cavia, Joseba; Troyano-Luque, Juan-Mario; Clavijo, Matilde
2001-01-01
Background Three-dimensional ultrasound images allow virtual sonography even at a distance. However, the size of final 3-D files limits their transmission through slow networks such as the Internet. Objective To analyze compression techniques that transform ultrasound images into small 3-D volumes that can be transmitted through the Internet without loss of relevant medical information. Methods Samples were selected from ultrasound examinations performed during 1999-2000 in the Obstetrics and Gynecology Department at the University Hospital in La Laguna, Canary Islands, Spain. The conventional ultrasound video output was recorded at 25 fps (frames per second) on a PC, producing 100- to 120-MB files (for from 500 to 550 frames). Processing to obtain 3-D images progressively reduced file size. Results The original frames passed through different compression stages: selecting the region of interest, rendering techniques, and compression for storage. Final 3-D volumes reached 1:25 compression rates (1.5- to 2-MB files). Those volumes need 7 to 8 minutes to be transmitted through the Internet at a mean data throughput of 6.6 Kbytes per second. At the receiving site, virtual sonography is possible using orthogonal projections or oblique cuts. Conclusions Modern volume-rendering techniques allowed distant virtual sonography through the Internet. This is the result of their efficient data compression that maintains its attractiveness as a main criterion for distant diagnosis. PMID:11720963
Ehara, Shoichi; Okuyama, Takuhiro; Shirai, Nobuyuki; Sugioka, Kenichi; Oe, Hiroki; Itoh, Toshihide; Matsuoka, Toshiyuki; Ikura, Yoshihiro; Ueda, Makiko; Naruko, Takahiko; Hozumi, Takeshi; Yoshiyama, Minoru
2009-08-01
Previous studies have shown a correlation between coronary artery cross-sectional diameter and left ventricular (LV) mass. However, no studies have examined the correlation between actual coronary artery volume (CAV) and LV mass. In the present study, measurements of CAV by 64-multislice computed tomography (MSCT) were validated and the relationship between CAV and LV mass was investigated. First, coronary artery phantoms consisting of syringes filled with solutions of contrast medium moving at simulated heart rates were scanned by 64-MSCT. Display window settings permitting accurate calculation of small volumes were optimized by evaluating volume-rendered images of the segmented contrast medium at different window settings. Next, 61 patients without significant coronary artery stenosis were scanned by 64-MSCT with the same protocol as for the phantoms. Coronary arteries were segmented on a workstation and the same window settings were applied to the volume-rendered images to calculate total CAV. Significant correlations between total CAV and LV mass (r=0.660, P<0.0001) were found, whereas an inverse relation was present between total CAV per 100 g of LV mass and LV mass. The novel concept of "CAV" for the characterization of coronary arteries may prove useful for future research, particularly on the causes of LV hypertrophy.
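Once the coronary tree is segmented at fixed window settings, total CAV follows from voxel counting. A minimal sketch of that final step (the voxel spacing is a placeholder, not the study's scanner geometry):

```python
def segmented_volume_ml(mask, voxel_mm=(0.4, 0.4, 0.5)):
    """mask: nested lists of 0/1 voxels; returns segmented volume in millilitres."""
    voxel_mm3 = voxel_mm[0] * voxel_mm[1] * voxel_mm[2]
    n_voxels = sum(v for plane in mask for row in plane for v in row)
    return n_voxels * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# a 10x10x10 all-foreground mask: 1000 voxels of 0.08 mm^3 each = 0.08 mL
mask = [[[1] * 10 for _ in range(10)] for _ in range(10)]
vol_ml = segmented_volume_ml(mask)
```

The study's key methodological point, that the display window settings determine which voxels enter the mask, sits upstream of this arithmetic.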
Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howison, Mark; Bethel, E. Wes; Childs, Hank
2012-01-01
With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
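The two-level decomposition can be illustrated with a toy maximum-intensity-projection ray caster. This is a single-process sketch, not the paper's MPI-plus-threads implementation: the "distributed" level is simulated by partitioning image rows into bricks, and the shared-memory level uses a thread pool within each brick.

```python
from concurrent.futures import ThreadPoolExecutor

def make_volume(n):
    # toy scalar field on an n^3 grid
    return [[[(x * y + z) % 7 for z in range(n)] for y in range(n)] for x in range(n)]

def cast_ray(vol, x, y):
    # orthographic maximum-intensity projection along the z axis
    return max(vol[x][y])

def render_brick(vol, rows):
    # shared-memory level: a thread pool casts the rays of one image brick
    n = len(vol)
    with ThreadPoolExecutor(max_workers=4) as pool:
        return {x: list(pool.map(lambda y: cast_ray(vol, x, y), range(n)))
                for x in rows}

def render(vol, num_ranks=2):
    # "distributed" level (simulated): partition image rows across ranks,
    # then merge the per-rank bricks into the final image
    n = len(vol)
    image = {}
    for rank in range(num_ranks):
        image.update(render_brick(vol, range(rank, n, num_ranks)))
    return [image[x] for x in range(n)]

img = render(make_volume(8))
```

In the real algorithm the merge step is the compositing communication phase whose cost the hybrid approach reduces by lowering the number of distributed participants.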
[Usefulness of volume rendering stereo-movie in neurosurgical craniotomies].
Fukunaga, Tateya; Mokudai, Toshihiko; Fukuoka, Masaaki; Maeda, Tomonori; Yamamoto, Kouji; Yamanaka, Kozue; Minakuchi, Kiyomi; Miyake, Hirohisa; Moriki, Akihito; Uchida, Yasufumi
2007-12-20
In recent years, the advancements in MR technology combined with the development of the multi-channel coil have resulted in substantially shortened inspection times. In addition, rapid improvement in functional performance in the workstation has greatly simplified the image-making process. Consequently, graphical images of intra-cranial lesions can be easily created. For example, the use of three-dimensional spoiled gradient echo (3D-SPGR) volume rendering (VR) after injection of a contrast medium is applied clinically as a preoperative reference image. Recently, improvements in 3D-SPGR VR high-resolution have enabled accurate surface images of the brain to be obtained. We used stereo-imaging created by weighted maximum intensity projection (Weighted MIP) to determine the skin incision line. Furthermore, the stereo imaging technique utilizing 3D-SPGR VR was actually used in the cases presented here. The techniques we report here appear to be very useful for pre-operative simulation of neurosurgical craniotomies.
A walk through the planned CS building. M.S. Thesis
NASA Technical Reports Server (NTRS)
Khorramabadi, Delnaz
1991-01-01
Using the architectural plan views of our future computer science building as test objects, we have completed the first stage of a building walkthrough system. The inputs to our system are AutoCAD files. An AutoCAD converter translates the geometrical information in these files into a format suitable for 3D rendering. Major model errors, such as incorrect polygon intersections and random face orientations, are detected and fixed automatically. Interactive viewing and editing tools are provided to view the results, to modify and clean the model and to change surface attributes. Our display system provides a simple-to-use user interface for interactive exploration of buildings. Using only the mouse buttons, the user can move inside and outside the building and change floors. Several viewing and rendering options are provided, such as restricting the viewing frustum, avoiding wall collisions, and selecting different rendering algorithms. A plan view of the current floor, with the position of the eye point and viewing direction on it, is displayed at all times. The scene illumination can be manipulated by interactively controlling intensity values for five light sources.
CSSG: Interactive Realism in Graphics with Complex Materials
2010-09-28
Publications during the reporting period (April 22, 2009 to June 30, 2010): Greg Nichols, Jeremy Shopf, and Chris Wyman, "Hierarchical Image-Space Radiosity for Interactive Global Illumination," paper presented at the Eurographics Symposium on Rendering, Girona, Spain, June 2009.
WebGL-enabled 3D visualization of a Solar Flare Simulation
NASA Astrophysics Data System (ADS)
Chen, A.; Cheung, C. M. M.; Chintzoglou, G.
2016-12-01
The visualization of magnetohydrodynamic (MHD) simulations of astrophysical systems such as solar flares often requires specialized software packages (e.g. ParaView and VAPOR). A shortcoming of using such software packages is the inability to share our findings with the public and scientific community in an interactive and engaging manner. By using the JavaScript-based WebGL application programming interface (API) and the three.js JavaScript library, we create an online in-browser experience for rendering solar flare simulations that is interactive and accessible to the general public. The WebGL renderer displays objects such as vector flow fields, streamlines and textured isosurfaces. This allows the user to explore the spatial relation between the solar coronal magnetic field and the thermodynamic structure of the plasma in which the magnetic field is embedded. Plans for extending the features of the renderer will also be presented.
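Streamlines like those the renderer displays are typically precomputed by numerically integrating the vector field. A minimal forward-Euler sketch (an assumed preprocessing step for illustration, not taken from the project's code):

```python
def streamline(field, seed, step=0.1, n_steps=50):
    """Trace a streamline from seed through field(x, y) -> (vx, vy) by forward Euler."""
    x, y = seed
    pts = [(x, y)]
    for _ in range(n_steps):
        vx, vy = field(x, y)
        x, y = x + step * vx, y + step * vy
        pts.append((x, y))
    return pts

# uniform field pointing in +x: the traced streamline is a horizontal line
pts = streamline(lambda x, y: (1.0, 0.0), seed=(0.0, 2.0))
```

In practice the resulting polylines would be uploaded once as three.js line geometry, so the browser only redraws rather than re-integrates each frame.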
US Army Armor Reference Data in Three Volumes. Volume I. The Army Division.
1981-01-01
Excerpt from capability tables: the support command of the Armored Division (TOE 17) provides combat service support including medical staff services, expedient dental treatment, and optometric services; the support command of the Infantry Division (Mechanized) likewise provides expedient dental treatment and division-level medical support.
Three-dimensional confocal microscopy of the living cornea and ocular lens
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1991-07-01
The three-dimensional reconstruction of the optic zone of the cornea and the ocular crystalline lens has been accomplished using confocal microscopy and volume rendering computer techniques. A laser scanning confocal microscope was used in the reflected light mode to obtain the two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon ion laser with a 488 nm wavelength. The microscope objective was a Leitz X25, NA 0.6 water immersion lens. The 400 micron thick cornea was optically sectioned into 133 three micron sections. The semi-transparent cornea and the in-situ ocular lens was visualized as high resolution, high contrast two-dimensional images. The structures observed in the cornea include: superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, basal lamina, nerve plexus, nerve fibers, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in-situ ocular lens include: lens capsule, lens epithelial cells, and individual lens fibers. The three-dimensional data sets of the cornea and the ocular lens were reconstructed in the computer using volume rendering techniques. Stereo pairs were also created of the two-dimensional ocular images for visualization. The stack of two-dimensional images was reconstructed into a three-dimensional object using volume rendering techniques. This demonstration of the three-dimensional visualization of the intact, enucleated eye provides an important step toward quantitative three-dimensional morphometry of the eye. The important aspects of three-dimensional reconstruction are discussed.
Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I
NASA Astrophysics Data System (ADS)
Gonthier, David L.; Veron, Harry
1998-04-01
A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in the stealth mode or as a player which includes battlefield simulations, such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high quality and fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows 95 and Windows NT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. Rendering uses RenderWare with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.
Visual Systems for Interactive Exploration and Mining of Large-Scale Neuroimaging Data Archives
Bowman, Ian; Joshi, Shantanu H.; Van Horn, John D.
2012-01-01
While technological advancements in neuroimaging scanner engineering have improved the efficiency of data acquisition, electronic data capture methods will likewise significantly expedite the populating of large-scale neuroimaging databases. As they do, and as these archives grow in size, a particular challenge lies in examining and interacting with the information that these resources contain through the development of compelling, user-driven approaches for data exploration and mining. In this article, we introduce the informatics visualization for neuroimaging (INVIZIAN) framework for the graphical rendering of, and dynamic interaction with the contents of large-scale neuroimaging data sets. We describe the rationale behind INVIZIAN, detail its development, and demonstrate its usage in examining a collection of over 900 T1-anatomical magnetic resonance imaging (MRI) image volumes from across a diverse set of clinical neuroimaging studies drawn from a leading neuroimaging database. Using a collection of cortical surface metrics and means for examining brain similarity, INVIZIAN graphically displays brain surfaces as points in a coordinate space and enables classification of clusters of neuroanatomically similar MRI images and data mining. As an initial step toward addressing the need for such user-friendly tools, INVIZIAN provides a unique means to interact with large quantities of electronic brain imaging archives in ways suitable for hypothesis generation and data mining. PMID:22536181
Roles of universal three-dimensional image analysis devices that assist surgical operations.
Sakamoto, Tsuyoshi
2014-04-01
The circumstances surrounding medical image analysis have undergone rapid evolution. In such a situation, it can be said that the "imaging" obtained through medical imaging modalities and the "analysis" that we employ have become amalgamated. Recently, we feel the distance between "imaging" and "analysis" has become closer regarding the imaging analysis of any organ system, as if both terms mentioned above have become integrated. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR) used in the helical scan had a significant impact and became the basis for recent image analysis. Subsequently, curved MPR (CPR) and other methods were developed, and the 3D diagnostic imaging and image analysis of the human body have started on a full scale. Volume rendering: the development of a new rendering algorithm and the significant improvement of memory and CPUs contributed to the development of "volume rendering," which allows 3D views with retained internal information. A new value was created by this development; computed tomography (CT) images that used to be for "diagnosis" before that time have become "applicable to treatment." In the past, before the development of volume rendering, a clinician had to mentally reconstruct an image reconfigured for diagnosis into a 3D image, but these developments have allowed the depiction of a 3D image on a monitor. Current technology: Currently, in Japan, the estimation of the liver volume and the perfusion area of the portal vein and hepatic vein is being vigorously adopted during preoperative planning for hepatectomy. This situation appears to have been brought about by substantial improvements in these basic techniques and in the user interface, allowing doctors to perform such manipulations easily by themselves. The following describes the specific techniques.
Future of post-processing technology: It is expected, in terms of the role of image analysis, for better or worse, that computer-aided diagnosis (CAD) will develop to a highly advanced level in every diagnostic field. Further, it is also expected in the treatment field that a technique coordinating various devices will be strongly required as a surgery navigator. Actually, surgery using an image navigator is being widely studied, and coordination with hardware, including robots, will also be developed. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
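The MPR idea described above, reformatting the same voxel data along different planes, reduces in the orthogonal case to re-indexing the volume array. A minimal sketch (illustrative only; the axis order and the toy volume are assumptions):

```python
# Volume stored as vol[z][y][x]; orthogonal MPR just reads the same
# voxels out along a different pair of axes.

def axial(vol, z):     # the native slice orientation
    return [row[:] for row in vol[z]]

def coronal(vol, y):   # resampled across slices at fixed y
    return [[vol[z][y][x] for x in range(len(vol[0][0]))] for z in range(len(vol))]

def sagittal(vol, x):  # resampled across slices at fixed x
    return [[vol[z][y][x] for y in range(len(vol[0]))] for z in range(len(vol))]

# a tiny 2x2x2 "volume" whose voxel value encodes its (z, y, x) index
vol = [[[100 * z + 10 * y + x for x in range(2)] for y in range(2)] for z in range(2)]
```

Curved MPR (CPR) generalizes this by sampling along an arbitrary curve, e.g. a vessel centerline, rather than a fixed axis.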
NASA Astrophysics Data System (ADS)
Rieder, Christian; Schwier, Michael; Weihusen, Andreas; Zidowitz, Stephan; Peitgen, Heinz-Otto
2009-02-01
Image guided radiofrequency ablation (RFA) is becoming a standard procedure as a minimally invasive method for tumor treatment in the clinical routine. The visualization of pathological tissue and potential risk structures like vessels or important organs gives essential support in image guided pre-interventional RFA planning. In this work our aim is to present novel visualization techniques for interactive RFA planning to support the physician with spatial information of pathological structures as well as the finding of trajectories without harming vitally important tissue. Furthermore, we illustrate three-dimensional applicator models from different manufacturers combined with corresponding ablation areas in homogeneous tissue, as specified by the manufacturers, to improve estimation of the extent of cell destruction caused by ablation. The visualization techniques are embedded in a workflow-oriented application, designed for use in the clinical routine. To allow high-quality volume rendering we integrated a visualization method using the fuzzy c-means algorithm. This method automatically defines a transfer function for volume visualization of vessels without the need for a segmentation mask. However, insufficient visualization results of the displayed vessels caused by low data quality can be improved using local vessel segmentation in the vicinity of the lesion. We also provide an interactive segmentation technique of liver tumors for the volumetric measurement and for the visualization of pathological tissue combined with anatomical structures. To support coagulation estimation with respect to the heat-sink effect, in which cooling blood flow decreases thermal ablation, a numerical simulation of the heat distribution is provided.
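The fuzzy c-means idea, cluster the scalar intensities, then use membership in the bright cluster as a transfer-function opacity, can be sketched in one dimension. This is a hedged illustration of the concept, not the authors' implementation; the toy intensities and initial centers are assumptions:

```python
def fcm_1d(values, centers, m=2.0, iters=20):
    """Fuzzy c-means on scalar intensities; returns (centers, memberships)."""
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        # membership of each value in each cluster (inverse-distance weighting)
        U = []
        for v in values:
            d = [abs(v - c) + 1e-12 for c in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
                      for i in range(len(centers))])
        # update centers as membership-weighted means
        centers = [sum(U[k][i] ** m * values[k] for k in range(len(values)))
                   / sum(U[k][i] ** m for k in range(len(values)))
                   for i in range(len(centers))]
    return centers, U

# toy CT-like intensities: background around 50, contrast-filled vessels around 200
vals = [48, 52, 50, 198, 202, 200]
centers, U = fcm_1d(vals, centers=[60.0, 180.0])
vessel = max(range(len(centers)), key=lambda i: centers[i])
# membership in the bright cluster acts as a per-voxel opacity
opacity = [U[k][vessel] for k in range(len(vals))]
```

Because memberships are soft, the resulting opacity ramps smoothly between tissue classes instead of requiring a hard segmentation mask.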
Advanced 3-dimensional planning in neurosurgery.
Ferroli, Paolo; Tringali, Giovanni; Acerbi, Francesco; Schiariti, Marco; Broggi, Morgan; Aquino, Domenico; Broggi, Giovanni
2013-01-01
During the past decades, medical applications of virtual reality technology have developed rapidly, from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements, such as feedback systems, increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described. Different systems of medical image volume rendering have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source software (3-D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution found that advanced 3-D planning before surgery facilitated and increased their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the processing power of modern computers. Although it has been found useful for understanding complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.
Cell Culture on MEMS Platforms: A Review
Ni, Ming; Tong, Wen Hao; Choudhury, Deepak; Rahim, Nur Aida Abdul; Iliescu, Ciprian; Yu, Hanry
2009-01-01
Microfabricated systems provide an excellent platform for the culture of cells, and are an extremely useful tool for the investigation of cellular responses to various stimuli. Advantages offered over traditional methods include cost-effectiveness, controllability, low volume, high resolution, and sensitivity. Both biocompatible and bio-incompatible materials have been developed for use in these applications. Biocompatible materials such as PMMA or PLGA can be used directly for cell culture. However, for bio-incompatible materials such as silicon or PDMS, additional steps need to be taken to render these materials more suitable for cell adhesion and maintenance. This review describes multiple surface modification strategies to improve the biocompatibility of MEMS materials. Basic concepts of cell-biomaterial interactions, such as protein adsorption and cell adhesion are covered. Finally, the applications of these MEMS materials in Tissue Engineering are presented. PMID:20054478
[Rendering surgical care to wounded with neck wounds in an armed conflict].
Samokhvalov, I M; Zavrazhnov, A A; Fakhrutdinov, A M; Sychev, M I
2001-10-01
The authors analyzed the results of medical care (first aid, qualified, and specialized care) rendered to 172 servicemen with neck injuries in the Republic of Chechnya during the period from 09.08.1999 to 28.07.2000. Based on this analysis and their experience in treating casualties, the authors discuss the sequence and volume of surgical care in this group of casualties with reference to the available medical evacuation system, and the surgical tactics at the stage of specialized care. They also consider the peculiarities of operative treatment of casualties with neck injuries.
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Real-time 3D image reconstruction guidance in liver resection surgery.
Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-04-01
Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, it adds difficulties that can be reduced through computer technology. From a patient's medical image [US, computed tomography (CT), or MRI], we have developed an Augmented Reality (AR) system that enhances the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of the anatomical or pathological structures appearing in the medical image, and registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of the 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance for safety, but also the current limits that automatic augmented reality will overcome. Virtual patient modeling should be mandatory for certain interventions that remain to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but currently remains limited by the complexity of organ deformations during surgery.
Intraoperative medical imaging, used in a new generation of automated augmented reality, should solve this issue thanks to the development of the hybrid OR.
Cryo-imaging of fluorescently labeled single cells in a mouse
NASA Astrophysics Data System (ADS)
Steyer, Grant J.; Roy, Debashish; Salvado, Olivier; Stone, Meredith E.; Wilson, David L.
2009-02-01
We developed a cryo-imaging system to provide single-cell detection of fluorescently labeled cells in mouse, with particular applicability to stem cells and metastatic cancer. The Case cryo-imaging system consists of a fluorescence microscope, robotic imaging positioner, customized cryostat, PC-based control system, and visualization/analysis software. The system alternates between sectioning (10-40 μm) and imaging, collecting color brightfield and fluorescent block-face image volumes >60 GB. In mouse experiments, we imaged quantum-dot-labeled stem cells, GFP-labeled cancer and stem cells, and cell-size fluorescent microspheres. To remove subsurface fluorescence, we used a simplified model of light-tissue interaction whereby the next image was scaled, blurred, and subtracted from the current image. We estimated the scaling and blurring parameters by minimizing the entropy of the subtracted images. Tissue-specific attenuation parameters were found [μT: heart (267 ± 47.6 μm), liver (218 ± 27.1 μm), brain (161 ± 27.4 μm)] to be within the range of estimates in the literature. "Next image" processing removed subsurface fluorescence equally well across multiple tissues (brain, kidney, liver, adipose tissue, etc.), and analysis of 200 microsphere images in the brain gave a 97 ± 2% reduction of subsurface fluorescence. Fluorescent signals were determined to arise from single cells based upon geometric and integrated intensity measurements. Next-image processing greatly improved axial resolution, enabled high-quality 3D volume renderings, and improved enumeration of single cells with connected-component analysis by up to 24%. Analysis of image volumes identified metastatic cancer sites, found homing of stem cells to injury sites, and showed that microsphere distribution correlated with blood flow patterns. We developed and evaluated cryo-imaging to provide single-cell detection of fluorescently labeled cells in mouse.
Our cryo-imaging system provides extreme (>60 GB), micron-scale, fluorescence and bright-field image data. Here we describe our image preprocessing, analysis, and visualization techniques. Processing improves axial resolution, reduces subsurface fluorescence by 97%, and enables single-cell detection and counting. High-quality 3D volume renderings enable us to evaluate cell distribution patterns. Applications include the myriad biomedical experiments that use fluorescent reporter genes and exogenous fluorophore labeling of cells, such as stem cell regenerative medicine, cancer, and tissue engineering.
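The "next image" correction lends itself to a compact sketch. The Gaussian blur model, the parameter values, and the synthetic data below are illustrative assumptions, not the authors' calibrated values; in the paper, the scale and blur parameters are fitted by minimizing the entropy of the subtracted image, so a histogram-entropy objective is included as well.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in pure NumPy."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, out)

def next_image_correction(current, nxt, scale, sigma):
    """Subtract a scaled, blurred copy of the next (deeper) block-face
    image to remove subsurface fluorescence shining through from below."""
    return np.clip(current - scale * gaussian_blur(nxt, sigma), 0.0, None)

def entropy(img, bins=64):
    """Histogram entropy: the objective minimized when fitting scale/sigma."""
    p, _ = np.histogram(img, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

# Synthetic example: a deep bright cell leaks blurred light into the
# current section; subtracting the model recovers the surface signal.
true_signal = np.zeros((32, 32)); true_signal[8, 8] = 1.0
deeper = np.zeros((32, 32)); deeper[20, 20] = 10.0
current = true_signal + 0.3 * gaussian_blur(deeper, 2.0)
corrected = next_image_correction(current, deeper, 0.3, 2.0)
```

A grid search over `scale` and `sigma` that minimizes `entropy(corrected)` would stand in for the paper's parameter estimation; the corrected image, being sparser, has a lower histogram entropy than the contaminated one.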
GenExp: an interactive web-based genomic DAS client with client-side data rendering.
Gel Moreno, Bernat; Messeguer Peypoch, Xavier
2011-01-01
The Distributed Annotation System (DAS) offers a standard protocol for sharing and integrating annotations on biological sequences. There are more than 1000 DAS sources available, and the number is steadily increasing. Clients are an essential part of the DAS system: they integrate data from several independent sources to create a representation useful to the user. While web-based DAS clients exist, most of them lack direct interaction capabilities such as dragging and zooming with the mouse. Here we present GenExp, a web-based and fully interactive visual DAS client. GenExp is a genome-oriented DAS client capable of creating informative representations of genomic data, zooming out from base level to complete chromosomes. It proposes a novel approach to genomic data rendering and uses the latest HTML5 web technologies to create the data representation inside the client browser. Thanks to client-side rendering, most position changes do not need a network request to the server, so responses to zooming and panning are almost immediate. In GenExp it is possible to explore the genome intuitively, moving it with the mouse just like in geographical map applications. Additionally, GenExp can display more than one data viewer at the same time and save the current state of the application to revisit it later. GenExp is a new interactive web-based client for DAS and addresses some of the shortcomings of existing clients. It uses client-side data rendering techniques, resulting in easier genome browsing and exploration. GenExp is open source under the GPL license and is freely available at http://gralggen.lsi.upc.edu/recerca/genexp.
Vidavsky, Netta; Akiva, Anat; Kaplan-Ashiri, Ifat; Rechav, Katya; Addadi, Lia; Weiner, Steve; Schertel, Andreas
2016-12-01
Many important biological questions can be addressed by studying, in 3D, large volumes of intact, cryo-fixed hydrated tissues (≥10,000 μm³) at high resolution (5-20 nm). This can be achieved using serial FIB milling and block-face surface imaging under cryo conditions. Here we demonstrate the unique potential of the cryo-FIB-SEM approach using two extensively studied model systems: sea urchin embryos and the tail fin of zebrafish larvae. We focus in particular on the environment of mineral deposition sites. The cellular organelles, including mitochondria, Golgi, ER, nuclei, and nuclear pores, are made visible by the image contrast created by differences in the surface potential of different biochemical components. Automatic segmentation and/or volume rendering of the image stacks and 3D reconstruction of the skeleton and the cellular environment provide a detailed view of the relative spatial distribution of the tissue/cellular components, and thus of their interactions. Simultaneous acquisition of secondary and back-scattered electron images adds further information. For example, a serial view of the zebrafish tail reveals the presence of electron-dense mineral particles inside mitochondrial networks extending more than 20 μm in depth into the block. Large-volume imaging using cryo-FIB-SEM, as demonstrated here, can contribute significantly to the understanding of the structures and functions of diverse biological tissues. Copyright © 2016 Elsevier Inc. All rights reserved.
Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters
Bajaj, Chandrajit
2009-01-01
Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualization has rapidly become an indispensable tool for studying large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to final display. Interactive browsing of extracted isosurfaces is made possible by parallel isosurface extraction and rendering, in conjunction with a new specialized piece of image compositing hardware called the Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data across parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces. PMID:19756231
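The out-of-core idea, loading only the pieces of the volume whose value range brackets the isovalue, can be sketched with a flat min/max brick index. This is a deliberate simplification of the paper's I/O-optimal external interval trees (which answer the same stabbing query with fewer disk reads); all names and sizes here are illustrative.

```python
import numpy as np

def build_block_index(volume, block=8):
    """Index each brick of the volume by its (min, max) value interval."""
    idx = []
    nz, ny, nx = volume.shape
    for z in range(0, nz, block):
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                b = volume[z:z+block, y:y+block, x:x+block]
                idx.append(((z, y, x), float(b.min()), float(b.max())))
    return idx

def bricks_for_isovalue(idx, iso):
    """Only bricks whose interval contains `iso` can hold surface cells;
    in an out-of-core setting, only these are loaded from disk."""
    return [origin for origin, lo, hi in idx if lo <= iso <= hi]

# Distance field of a sphere: the isosurface at radius 10 touches only a
# thin shell of bricks, so most of the volume is never read.
g = np.mgrid[0:32, 0:32, 0:32]
dist = np.sqrt(((g - 15.5) ** 2).sum(axis=0))
index = build_block_index(dist)
hot = bricks_for_isovalue(index, 10.0)
```

An external interval tree replaces the linear scan of `index` with an O(log n + k) query over disk-resident intervals, which is what makes the approach scale past memory.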
Interactive 3D visualization for theoretical virtual observatories
NASA Astrophysics Data System (ADS)
Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.
2018-06-01
Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for the analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via, e.g., mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally, we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, and highlight two example use cases within the Theoretical Astrophysical Observatory.
2004-04-15
Computed tomography (CT) images of resin-impregnated Mechanics of Granular Materials (MGM) specimens are assembled to provide 3-D volume renderings of the density patterns formed by dislocation under the external loading stress profile applied during the experiments. The experiments flew on STS-79 and STS-89. Principal Investigator: Dr. Stein Sture
Chen, Lih-Shyang; Hsu, Ta-Wen; Chang, Shu-Han; Lin, Chih-Wen; Chen, Yu-Ruei; Hsieh, Chin-Chiang; Han, Shu-Chen; Chang, Ku-Yaw; Hou, Chun-Ju
2017-01-01
Objective: In traditional surface-rendering (SR) computed tomographic endoscopy, only the shape of an endoluminal lesion is depicted, without gray-level information, unless the volume rendering technique is used. However, volume rendering is relatively slow and complex in terms of computation time and parameter setting. We use computed tomographic colonography (CTC) images as examples and report a new visualization technique based on three-dimensional gray-level mapping (GM) to better identify and differentiate endoluminal lesions. Methods: Thirty-three endoluminal cases from 30 patients were evaluated in this clinical study. These cases were segmented using gray-level thresholding. The marching cubes algorithm was used to detect isosurfaces in the volumetric data sets. GM is applied using the surface gray level of the CTC. Radiologists conducted the clinical evaluation of the SR and GM images. The Wilcoxon signed-rank test was used for data analysis. Results: Clinical evaluation confirms that GM is significantly superior to SR in terms of gray-level pattern and spatial shape presentation of endoluminal cases (p < 0.01) and significantly improves the confidence of identification and clinical classification of endoluminal lesions (p < 0.01). The specificity and diagnostic accuracy of GM are significantly better than those of SR in the diagnostic performance evaluation (p < 0.01). Conclusion: GM can reduce confusion in three-dimensional CTC and correlates CTC well with sectional images by location as well as gray-level value. Hence, GM improves the identification and differentiation of endoluminal lesions and facilitates the diagnostic process. Advances in knowledge: GM significantly improves the traditional SR method by providing reliable gray-level information for the surface points and is helpful for identifying and differentiating endoluminal lesions according to their shape and density. PMID:27925483
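The core of GM, attaching a gray level to each isosurface vertex by sampling the CT volume at the vertex's fractional position, can be sketched with trilinear interpolation. This is a generic sketch, not the authors' code; the vertex coordinates stand in for marching-cubes output.

```python
import numpy as np

def sample_gray(volume, verts):
    """Trilinearly sample gray levels at fractional (z, y, x) vertex
    positions (e.g. marching-cubes output) for surface gray-level mapping."""
    verts = np.asarray(verts, float)
    i0 = np.clip(np.floor(verts).astype(int), 0,
                 np.array(volume.shape) - 2)        # base corner of each cell
    f = verts - i0                                   # fractional offsets
    out = np.zeros(len(verts))
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((f[:, 0] if dz else 1 - f[:, 0]) *
                     (f[:, 1] if dy else 1 - f[:, 1]) *
                     (f[:, 2] if dx else 1 - f[:, 2]))
                out += w * volume[i0[:, 0] + dz, i0[:, 1] + dy, i0[:, 2] + dx]
    return out

# On a linear ramp volume[z, y, x] = z + y + x, trilinear sampling is exact.
zz, yy, xx = np.mgrid[0:4, 0:4, 0:4]
vol = (zz + yy + xx).astype(float)
gray = sample_gray(vol, [[1.5, 2.25, 0.5]])   # expect 1.5 + 2.25 + 0.5
```

The sampled values would then drive per-vertex shading, which is how GM adds density information to an otherwise shape-only surface rendering.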
Efficient in-situ visualization of unsteady flows in climate simulation
NASA Astrophysics Data System (ADS)
Vetter, Michael; Olbrich, Stephan
2017-04-01
The simulation of climate data tends to produce very large data sets, which can hardly be processed in classical post-processing visualization applications. Typically, the visualization pipeline, consisting of data generation, visualization mapping, and rendering, is distributed in two parts over the network or separated via file transfer. In most traditional post-processing scenarios, the simulation is done on a supercomputer, whereas the data analysis and visualization are done on a graphics workstation. That way, temporary data sets of huge volume have to be transferred over the network, which leads to bandwidth bottlenecks and volume limitations. The solution to this issue is the avoidance of temporary storage, or at least a significant reduction of data complexity. Within the Climate Visualization Lab - as part of the Cluster of Excellence "Integrated Climate System Analysis and Prediction" (CliSAP) at the University of Hamburg, in cooperation with the German Climate Computing Center (DKRZ) - we develop and integrate an in-situ approach. Our software framework DSVR is based on separating the process chain between the mapping and rendering processes. It couples the mapping process directly to the simulation by calling methods of a parallelized data extraction library, which creates a time-based sequence of geometric 3D scenes. This sequence is stored on a special streaming server with an interactive post-filtering option and then played out asynchronously in a separate 3D viewer application. Since the rendering is part of this viewer application, the scenes can be navigated interactively. In contrast to other in-situ approaches, where 2D images are created as part of the simulation or synchronous co-visualization takes place, our method supports interaction in 3D space and in time, as well as fixed frame rates.
To integrate in-situ processing based on our DSVR framework and methods into the ICON climate model, we are continuously evolving the data structures and mapping algorithms of the framework to support the ICON model's native grid structures, since DSVR was originally designed for rectilinear grids only. We have now implemented a new output module for ICON to take advantage of DSVR visualization. The visualization can be configured, like most output modules, using a specific namelist, and is integrated, as an example, within the non-hydrostatic atmospheric model time loop. With the integration of DSVR-based in-situ pathline extraction within ICON, a further milestone has been reached. The pathline algorithm as well as the grid data structures have been optimized for the domain decomposition used for the parallelization of ICON based on MPI and OpenMP. The software implementation and evaluation are done on the supercomputers at DKRZ. In principle, the data complexity is reduced from O(n³) to O(m), where n is the grid resolution and m is the number of supporting points of all pathlines. The stability and scalability evaluation is done using Atmospheric Model Intercomparison Project (AMIP) runs. We will give a short introduction to our software framework, as well as a short overview of the implementation and usage of DSVR within ICON. Furthermore, we will present visualization and evaluation results from sample applications.
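The O(n³) to O(m) reduction comes from keeping only the supporting points of each pathline instead of the full field. A minimal midpoint-rule integrator conveys the idea; it is illustrative only (DSVR/ICON use their own parallel, domain-decomposed implementation), and the analytic rotation field below is a stand-in for simulated winds.

```python
import numpy as np

def advect_pathlines(velocity, seeds, t0, t1, dt):
    """Integrate pathlines through a (possibly time-varying) velocity field;
    `velocity(t, p)` maps an (n, 2) position array to velocities. Returns
    the (steps, n, 2) supporting points - the O(m) output kept in-situ."""
    p = np.asarray(seeds, float).copy()
    points, t = [p.copy()], t0
    while t < t1 - 1e-12:
        k1 = velocity(t, p)                        # midpoint (RK2) step
        k2 = velocity(t + 0.5 * dt, p + 0.5 * dt * k1)
        p = p + dt * k2
        t += dt
        points.append(p.copy())
    return np.stack(points)

# Rigid rotation: after a quarter turn the seed (1, 0) should be near (0, 1).
vel = lambda t, p: np.stack([-p[:, 1], p[:, 0]], axis=1)
paths = advect_pathlines(vel, [[1.0, 0.0]], 0.0, np.pi / 2, 1e-3)
```

Only `paths` would leave the simulation, so the output size scales with the number of seeds and time steps rather than with the grid resolution cubed.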
Lee, Ki-Wook; Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Lim, Sang-Min; Chang, Seok Woo; Ha, Byung-Hyun; Zhu, Qiang; Kum, Kee-Yeon
2014-03-01
Micro-computed tomography (MCT) shows detailed root canal morphology that is not seen with traditional tooth clearing. However, alternative image reformatting techniques in MCT involving 2-dimensional (2D) minimum intensity projection (MinIP) and 3-dimensional (3D) volume-rendering reconstruction have not been directly compared with clearing. The aim was to compare alternative image reformatting techniques in MCT with tooth clearing on the mesiobuccal (MB) root of maxillary first molars. Eighteen maxillary first molar MB roots were scanned, and 2D MinIP and 3D volume-rendered images were reconstructed. Subsequently, the same MB roots were processed by traditional tooth clearing. Images from 2D, 3D, 2D + 3D, and clearing techniques were assessed by 4 endodontists to classify canal configuration and to identify fine anatomic structures such as accessory canals, intercanal communications, and loops. All image reformatting techniques in MCT showed detailed configurations and numerous fine structures, such that none were classified as simple type I or II canals; several were classified as types III and IV according to Weine classification or types IV, V, and VI according to Vertucci; and most were nonclassifiable because of their complexity. The clearing images showed less detail, few fine structures, and numerous type I canals. Classification of canal configuration was in 100% intraobserver agreement for all 18 roots visualized by any of the image reformatting techniques in MCT but for only 4 roots (22.2%) classified according to Weine and 6 (33.3%) classified according to Vertucci, when using the clearing technique. The combination of 2D MinIP and 3D volume-rendered images showed the most detailed canal morphology and fine anatomic structures. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Armstrong, Ryan; de Ribaupierre, Sandrine; Eagleson, Roy
2014-04-01
This paper describes the design and development of a software tool for the evaluation and training of surgical residents using an interactive, immersive, virtual environment. Our objective was to develop a tool to evaluate user spatial reasoning skills and knowledge in a neuroanatomical context, as well as to augment their performance through interactivity. In the visualization, manually segmented anatomical surface images of MRI scans of the brain were rendered using a stereo display to improve depth cues. A magnetically tracked wand was used as a 3D input device for localization tasks within the brain. The movement of the wand was made to correspond to movement of a spherical cursor within the rendered scene, providing a reference for localization. Users can be tested on their ability to localize structures within the 3D scene, and their ability to place anatomical features at the appropriate locations within the rendering. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Ogata, Yuji; Nakahara, Tadaki; Ode, Kenichi; Matsusaka, Yohji; Katagiri, Mari; Iwabuchi, Yu; Itoh, Kazunari; Ichimura, Akira; Jinzaki, Masahiro
2017-05-01
We developed a method of projecting bone SPECT image data onto 3D volume-rendered CT images for 3D SPECT/CT fusion. The aims of our study were to evaluate its feasibility and clinical usefulness. Whole-body bone scintigraphy (WB) and SPECT/CT scans were performed in 318 cancer patients using a dedicated SPECT/CT system. Volume data of bone SPECT and CT were fused to obtain 2D SPECT/CT images. To generate our 3D SPECT/CT images, colored voxel data of bone SPECT were projected onto the corresponding locations in the volume-rendered CT data after a semi-automatic bone extraction. The resultant 3D images were then blended with conventional volume-rendered CT images, making it possible to grasp the three-dimensional relationship between bone metabolism and anatomy. WB and SPECT (WB + SPECT), 2D SPECT/CT fusion, and 3D SPECT/CT fusion were evaluated by two independent reviewers in the diagnosis of bone metastasis. The inter-observer variability and diagnostic accuracy of these three image sets were investigated using a four-point diagnostic scale. Increased bone metabolism was found in 744 metastatic sites and 1002 benign changes. On a per-lesion basis, inter-observer agreement in the diagnosis of bone metastasis was 0.72 for WB + SPECT, 0.90 for 2D SPECT/CT, and 0.89 for 3D SPECT/CT. Receiver operating characteristic analyses of the diagnostic accuracy for bone metastasis showed that WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT had areas under the curve of 0.800, 0.983, and 0.983 for reader 1, and 0.865, 0.992, and 0.993 for reader 2, respectively (WB + SPECT vs. 2D or 3D SPECT/CT, p < 0.001; 2D vs. 3D SPECT/CT, n.s.). The durations of interpretation of WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT images were 241 ± 75, 225 ± 73, and 182 ± 71 s for reader 1, and 207 ± 72, 190 ± 73, and 179 ± 73 s for reader 2, respectively. Thus, it took less time to read 3D SPECT/CT images than 2D SPECT/CT (p < 0.0001) or WB + SPECT images (p < 0.0001).
3D SPECT/CT fusion offers diagnostic accuracy comparable to 2D SPECT/CT fusion, and its visual effect reduces reading time compared with 2D SPECT/CT fusion.
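The final step, blending the color-coded SPECT projection with the volume-rendered CT image, amounts to a per-pixel alpha blend. The hot-style color ramp and the alpha value below are illustrative assumptions, not the study's rendering parameters.

```python
import numpy as np

def uptake_to_rgb(counts):
    """Map normalized SPECT uptake in [0, 1] to a simple 'hot' color ramp."""
    c = np.clip(counts, 0.0, 1.0)
    return np.stack([np.clip(3 * c, 0, 1),       # red rises first
                     np.clip(3 * c - 1, 0, 1),   # then green
                     np.clip(3 * c - 2, 0, 1)],  # then blue -> white
                    axis=-1)

def blend(ct_rgb, spect_rgb, alpha=0.5):
    """Alpha-blend the colored SPECT overlay onto the rendered CT image."""
    return (1.0 - alpha) * ct_rgb + alpha * spect_rgb

ct = np.full((4, 4, 3), 0.8)                     # bright gray CT rendering
overlay = uptake_to_rgb(np.full((4, 4), 1.0))    # maximal uptake -> white
fused = blend(ct, overlay, alpha=0.5)
```

In the actual system the overlay is applied to surface-extracted bone voxels rather than to the whole frame, but the compositing arithmetic is the same.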
DOT National Transportation Integrated Search
2017-04-01
This is the third of three reports examining driver medical review practices in the United States and how they fulfill the basic functions of identifying, assessing, and rendering licensing decisions on medically or functionally at-risk drivers. ...
NASA Astrophysics Data System (ADS)
Henri, Christopher J.; Pike, Gordon; Collins, D. Louis; Peters, Terence M.
1990-07-01
We present two methods for acquiring and viewing integrated 3-D images of cerebral vasculature and cortical anatomy. The aim of each technique is to provide the neurosurgeon or radiologist with a 3-D image containing information that cannot ordinarily be obtained from a single imaging modality. The first approach employs recent developments in MR, which is now capable of imaging flowing blood as well as static tissue. Here, true 3-D data are acquired and displayed using volume or surface rendering techniques. The second approach is based on the integration of x-ray projection angiograms and tomographic image data, allowing a composite image of anatomy and vasculature to be viewed in 3-D. This is accomplished by superimposing an angiographic stereo pair onto volume-rendered images of either CT or MR data created from matched viewing geometries. The two approaches are outlined and compared. Results are presented for each technique, and potential clinical applications are discussed.
Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P
2006-08-01
Exact surgical planning is necessary for complex operations on pathological changes in the anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operation planning based on CT data are increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft-tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from it. A newly developed and modified powerful raycasting-based 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to its easy handling and high-quality visualization, with an enormous gain of information, the presented system is now an established part of routine surgical planning.
Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe
2013-09-01
Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology room into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field, and the bottom shows the stereoscopic 3D-rendered images, which are controlled by a 3D joystick installed on the console and updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The surgeon considered this intra-operative mixed reality technology very useful. This technique is a step toward computer-aided surgery, which will progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.
Three-dimensional rendering in medicine: some common misconceptions
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.
2001-05-01
As seen in the medical imaging literature and in the poster presentations at the annual conference of the Radiological Society of North America during the past 10 years, several misconceptions are held relating to 3D rendering of medical images. The purpose of this presentation is to illustrate and clarify these with medical examples. Most of the misconceptions have to do with a mix-up of the issues related to the common visualization techniques, viz., surface rendering (SR) and volume rendering (VR), and methods of image segmentation. In our survey, we came across the following most commonly held conceptions, which we believe (and shall demonstrate) are not correct: (1) SR equated to thresholding. (2) VR considered not to require segmentation. (3) VR considered to achieve higher resolution than SR. (4) SR/VR considered to require specialized hardware to achieve adequate speed. We shall briefly define and establish some fundamental terms to obviate any potential for terminology-related misconceptions. Subsequently, we shall sort out these issues and illustrate with examples why the above conceptions are incorrect. There are many SR methods that use segmentations far superior to thresholding. All VR techniques (except the straightforward MIP) require some form of fuzzy object specification, that is, fuzzy segmentation. The details seen in renditions depend fundamentally on the segmentation technique in addition to the rendering method. There are fast software-based rendering methods that give performance on PCs similar to or exceeding that of expensive hardware systems. Most of the difficulties encountered in visualization (and also in image processing and analysis) stem from the difficulties in segmentation. It is important to identify these and separate them from the issues related purely to 3D rendering.
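The contrast the abstract draws between hard thresholding and fuzzy object specification can be illustrated in a few lines. The following Python sketch uses invented intensity values and a trapezoidal membership function of our own choosing; it is an illustration of the concept, not anything taken from the paper.

```python
import numpy as np

# Synthetic CT-like intensities (hypothetical values, for illustration only).
volume = np.random.default_rng(0).normal(100.0, 20.0, size=(32, 32, 32))

# Hard segmentation: a binary mask from a single threshold. The "SR equals
# thresholding" misconception treats this as the only surface-rendering input.
hard_mask = volume >= 110.0

# Fuzzy segmentation: a ramp membership function assigns each voxel a degree
# of "objectness" in [0, 1], the kind of fuzzy object specification that most
# volume rendering techniques (MIP excepted) implicitly require.
def fuzzy_membership(v, lo=90.0, hi=130.0):
    return np.clip((v - lo) / (hi - lo), 0.0, 1.0)

opacity = fuzzy_membership(volume)
print(hard_mask.mean(), opacity.mean())
```

The hard mask discards all gradation at the object boundary, whereas the fuzzy opacities retain it, which is why the two families of methods behave so differently near thin or low-contrast structures.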
LOD map--A visual interface for navigating multiresolution volume visualization.
Wang, Chaoli; Shen, Han-Wei
2006-01-01
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. An LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
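The entropy formulation mentioned in the abstract can be sketched in a few lines. The distortion and contribution values below are synthetic stand-ins, and the exact weighting the paper uses may differ; this is only a minimal illustration of an entropy-based quality measure over multiresolution blocks.

```python
import numpy as np

# Hedged sketch: Shannon entropy over per-block distortion * contribution
# weights. High entropy = error budget evenly spread over blocks; low entropy
# = one block dominates and is the obvious candidate for LOD refinement.
def lod_entropy(distortion, contribution):
    w = np.asarray(distortion, float) * np.asarray(contribution, float)
    p = w / w.sum()          # normalize to a probability distribution
    p = p[p > 0]             # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# Four multiresolution blocks: per-block distortion and screen contribution.
H = lod_entropy([0.2, 0.1, 0.4, 0.3], [0.5, 0.9, 0.2, 0.4])
print(H)  # between 0 (one block dominates) and log2(4) = 2 (balanced)
```

A treemap cell per block, sized and colored by these ingredients, is then one plausible way to obtain the kind of at-a-glance LOD map the paper describes.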
Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trebotich, D
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
A Case-Based Study with Radiologists Performing Diagnosis Tasks in Virtual Reality.
Venson, José Eduardo; Albiero Berni, Jean Carlo; Edmilson da Silva Maia, Carlos; Marques da Silva, Ana Maria; Cordeiro d'Ornellas, Marcos; Maciel, Anderson
2017-01-01
In radiology diagnosis, medical images are most often visualized slice by slice. At the same time, visualization based on 3D volumetric rendering of the data is considered useful, and its field of application has expanded. In this work, we present a case-based study with 16 medical specialists to assess the diagnostic effectiveness of a virtual reality interface in fracture identification over 3D volumetric reconstructions. We developed a VR volume viewer compatible with both the Oculus Rift and handheld-based head-mounted displays (HMDs). We then performed user experiments to validate the approach in a diagnosis environment. In addition, we assessed the subjects' perception of the 3D reconstruction quality, ease of interaction and ergonomics, and also the users' opinions on how VR applications can be useful in healthcare. Among other results, we found a high level of effectiveness of the VR interface in identifying superficial fractures on head CTs.
Modeling complex biological flows in multi-scale systems using the APDEC framework
NASA Astrophysics Data System (ADS)
Trebotich, David
2006-09-01
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
Badrinarayan, Preethi; Sastry, G. Narahari
2014-01-01
The present study examines the conformational transitions occurring among the major structural motifs of Aurora kinase (AK) concomitant with the DFG-flip and deciphers the role of non-covalent interactions in rendering specificity. Multiple sequence alignment, docking and structural analysis of a repertoire of 56 crystal structures of AK from the Protein Data Bank (PDB) have been carried out. The crystal structures were systematically categorized based on the conformational disposition of the DFG-loop [in (DI) 42, out (DO) 5 and out-up (DOU) 9], G-loop [extended (GE) 53 and folded (GF) 3] and αC-helix [in (CI) 42 and out (CO) 14]. The overlapping subsets arising from this categorization show the inter-dependency among the structural motifs. Therefore, the four distinct possibilities a) 2W1C (DI, CI, GE), b) 3E5A (DI, CI, GF), c) 3DJ6 (DI, CO, GF) and d) 3UNZ (DOU, CO, GF), along with their co-crystals and apo-forms, were subjected to molecular dynamics simulations of 40 ns each to evaluate the variations of individual residues and their impact on forming interactions. The non-covalent interactions formed by the 157 AK co-crystals with different regions of the binding site were initially studied with the docked complexes and structure interaction fingerprints. The frequency of the most prominent interactions was gauged in the AK inhibitors from the PDB and in the four representative conformations during the 40 ns simulations. Based on this study, seven major non-covalent interactions and their complementary sites in AK capable of rendering specificity have been prioritized for the design of different classes of inhibitors. PMID:25485544
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously, these tasks were accomplished on two different computer systems, Windows and Linux, which limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
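The reassembly step described above — stacking per-slice binary masks into a 3D object — is simple to sketch. The paper's implementation is in Matlab; the following is a minimal NumPy analogue with hypothetical slice data and spacings of our own choosing.

```python
import numpy as np

# Hedged sketch: reassemble segmented 2D binary slice images into a 3D
# boolean volume, as a precursor to surfacing and rendering the object.
def stack_slices(slices, z_spacing=1.0, xy_spacing=1.0):
    """Stack per-slice binary masks along z; also return the object's
    physical volume (voxel count times voxel volume)."""
    vol = np.stack([np.asarray(s, dtype=bool) for s in slices], axis=0)
    voxel_volume = z_spacing * xy_spacing * xy_spacing
    return vol, vol.sum() * voxel_volume

# Three identical 4x4 cross-sections of a hypothetical segmented neuron,
# with 0.5-unit spacing between confocal slices.
s = np.zeros((4, 4), dtype=bool)
s[1:3, 1:3] = True
vol, physical = stack_slices([s, s, s], z_spacing=0.5)
print(vol.shape, physical)  # (3, 4, 4) 6.0
```

From such a volume, a surface can then be fitted (e.g. by isosurfacing) and its rendering properties varied per object, matching the workflow the abstract outlines.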
Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications, and at the same time work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios and scalability results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, James P; Patchett, John M; Lo, Li - Ta
2011-01-24
This report provides documentation for the completion of the Los Alamos portion of the ASC Level II 'Visualization on the Supercomputing Platform' milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratory and Los Alamos National Laboratory. The milestone text is shown in Figure 1 with the Los Alamos portions highlighted in boldfaced text. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. In conclusion, we improved CPU-based rendering performance by a factor of 2-10 times in our tests. In addition, we evaluated CPU- and GPU-based rendering performance.
We encourage production visualization experts to consider using CPU-based rendering solutions when appropriate. For example, on remote supercomputers CPU-based rendering can offer a means of viewing data without having to offload the data or geometry onto a GPU-based visualization system. In terms of the comparative performance of the CPU and GPU, we believe that further optimizations of both CPU- and GPU-based rendering are possible. The simulation community is currently confronting this reality as they work to port their simulations to different hardware architectures. What is interesting about CPU rendering of massive datasets is that for the past two decades GPU performance has significantly outperformed CPU-based systems. Based on our advancements, evaluations and explorations, we believe that CPU-based rendering has returned as one viable option for the visualization of massive datasets.
Mania, Katerina; Wooldridge, Dave; Coxon, Matthew; Robinson, Andrew
2006-01-01
Accuracy of memory performance per se is an imperfect reflection of the cognitive activity (awareness states) that underlies performance in memory tasks. The aim of this research is to investigate the effect of varied visual and interaction fidelity of immersive virtual environments on memory awareness states. A between groups experiment was carried out to explore the effect of rendering quality on location-based recognition memory for objects and associated states of awareness. The experimental space, consisting of two interconnected rooms, was rendered either flat-shaded or using radiosity rendering. The computer graphics simulations were displayed on a stereo head-tracked Head Mounted Display. Participants completed a recognition memory task after exposure to the experimental space and reported one of four states of awareness following object recognition. These reflected the level of visual mental imagery involved during retrieval, the familiarity of the recollection, and also included guesses. Experimental results revealed variations in the distribution of participants' awareness states across conditions while memory performance failed to reveal any. Interestingly, results revealed a higher proportion of recollections associated with mental imagery in the flat-shaded condition. These findings comply with similar effects revealed in two earlier studies summarized here, which demonstrated that the less "naturalistic" interaction interface or interface of low interaction fidelity provoked a higher proportion of recognitions based on visual mental images.
Style grammars for interactive visualization of architecture.
Aliaga, Daniel G; Rosen, Paul A; Bekins, Daniel R
2007-01-01
Interactive visualization of architecture provides a way to quickly visualize existing or novel buildings and structures. Such applications require both fast rendering and an effortless input regimen for creating and changing architecture using high-level editing operations that automatically fill in the necessary details. Procedural modeling and synthesis is a powerful paradigm that yields high data amplification and can be coupled with fast-rendering techniques to quickly generate plausible details of a scene without much or any user interaction. Previously, forward generating procedural methods have been proposed where a procedure is explicitly created to generate particular content. In this paper, we present our work in inverse procedural modeling of buildings and describe how to use an extracted repertoire of building grammars to facilitate the visualization and quick modification of architectural structures and buildings. We demonstrate an interactive application where the user draws simple building blocks and, using our system, can automatically complete the building "in the style of" other buildings using view-dependent texture mapping or nonphotorealistic rendering techniques. Our system supports an arbitrary number of building grammars created from user subdivided building models and captured photographs. Using only edit, copy, and paste metaphors, the entire building styles can be altered and transferred from one building to another in a few operations, enhancing the ability to modify an existing architectural structure or to visualize a novel building in the style of the others.
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
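The S-NDF idea — accumulating, per screen pixel, a distribution over the surface normals that project into it — can be sketched with a simple binned histogram. The bin layout, resolutions and data below are our own choices, not the paper's; the published method stores and filters these distributions on the GPU.

```python
import numpy as np

# Hedged sketch of a screen-space normal distribution function (S-NDF):
# for each pixel, a histogram over discretized normal directions.
def accumulate_sndf(pixels, normals, n_bins_theta=4, n_bins_phi=8, shape=(8, 8)):
    """pixels: (N, 2) integer pixel coordinates; normals: (N, 3) unit normals.
    Returns an array of shape (H, W, n_bins_theta, n_bins_phi)."""
    normals = np.asarray(normals, dtype=float)
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))     # polar angle
    phi = np.arctan2(normals[:, 1], normals[:, 0]) % (2 * np.pi)
    ti = np.minimum((theta / np.pi * n_bins_theta).astype(int), n_bins_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_bins_phi).astype(int), n_bins_phi - 1)
    hist = np.zeros(shape + (n_bins_theta, n_bins_phi))
    np.add.at(hist, (pixels[:, 0], pixels[:, 1], ti, pj), 1.0)
    return hist

# Two particles landing in the same pixel with different surface normals.
px = np.array([[3, 3], [3, 3]])
nm = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
h = accumulate_sndf(px, nm)
print(h[3, 3].sum())  # 2.0: both samples recorded in that pixel's NDF
```

Because the histogram summarizes all sub-pixel normals, a shading model can later be integrated against it to re-light the image without touching the particle data, which is the property the abstract emphasizes.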
Skype Me! Socially Contingent Interactions Help Toddlers Learn Language
ERIC Educational Resources Information Center
Roseberry, Sarah; Hirsh-Pasek, Kathy; Golinkoff, Roberta M.
2014-01-01
Language learning takes place in the context of social interactions, yet the mechanisms that render social interactions useful for learning language remain unclear. This study focuses on whether social contingency might support word learning. Toddlers aged 24-30 months (N = 36) were exposed to novel verbs in one of three conditions: live…
An Agent Based Collaborative Simplification of 3D Mesh Model
NASA Astrophysics Data System (ADS)
Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro
Large-volume mesh models face challenges in fast rendering and transmission over the Internet. The mesh models currently obtained using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the Mobile-C development platform. Communication among distributed agents includes grabbing images of the visualized mesh model, annotating the grabbed images, and instant messaging. Remote and collaborative simplification can thus be conducted efficiently over the Internet.
Visualizing 3D data obtained from microscopy on the Internet.
Pittet, J J; Henn, C; Engel, A; Heymann, J B
1999-01-01
The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. The traditional presentation of static two-dimensional images of real-world objects on the limited medium of paper can now be shown interactively in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cube approach, allowing interactive isosurfacing. A second node does three-dimensional (3D) texture-based volume-rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, aiming to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
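The isosurfacing node described above is based on the marching cubes approach. Its first step — classifying each cell of the scalar volume by an 8-bit case index built from its corner values — can be sketched as follows; the triangulation lookup table that turns case indices into geometry is omitted, and the test volume is our own synthetic example.

```python
import numpy as np

# Hedged sketch of the marching cubes classification step: one bit per cell
# corner, set when the corner value is at or above the isovalue.
def cell_case_indices(volume, isovalue):
    v = np.asarray(volume, dtype=float)
    inside = v >= isovalue
    c = np.zeros(tuple(s - 1 for s in v.shape), dtype=np.uint8)
    # Corner offsets in (z, y, x) order; the numbering convention is ours.
    for bit, (dz, dy, dx) in enumerate([(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0),
                                        (1, 0, 0), (1, 0, 1), (1, 1, 1), (1, 1, 0)]):
        c |= (inside[dz:dz + c.shape[0], dy:dy + c.shape[1], dx:dx + c.shape[2]]
              .astype(np.uint8) << bit)
    return c

# Synthetic volume: squared distance from the center of a 5^3 grid.
vol = np.fromfunction(lambda z, y, x: (z - 2)**2 + (y - 2)**2 + (x - 2)**2,
                      (5, 5, 5))
cases = cell_case_indices(vol, isovalue=4.0)
# Cases 0 (all outside) and 255 (all inside) emit no triangles; only the
# remaining boundary cells are passed to the triangulation table.
boundary = (cases != 0) & (cases != 255)
print(boundary.sum())
```

Real-time behavior, as in the VRML node, comes from evaluating exactly this classification on the fly each time the user drags the isovalue, then triangulating only the boundary cells.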
Intuitive Exploration of Volumetric Data Using Dynamic Galleries.
Jönsson, Daniel; Falk, Martin; Ynnerman, Anders
2016-01-01
In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitization of artifacts in museums is of rapidly increasing interest, as an enhanced user experience can be achieved through interactive data visualization. This is, however, a challenging task, since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information, but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain, where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.
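The gallery mechanism above — each preview covering one subrange of the data domain, navigable by zooming and panning — can be sketched in a few lines. All names and the binary per-preview transfer function are our own simplifications; the paper's previews are full renderings.

```python
import numpy as np

# Hedged sketch: split the (possibly zoomed/panned) data domain into equal
# subranges, one per gallery preview.
def preview_subranges(vmin, vmax, n_previews, zoom=1.0, pan=0.0):
    width = (vmax - vmin) / zoom          # zooming narrows the visible domain
    lo = vmin + pan                        # panning shifts it
    edges = np.linspace(lo, lo + width, n_previews + 1)
    return list(zip(edges[:-1], edges[1:]))

# A simplified per-preview transfer function: fully opaque inside the
# preview's subrange, fully transparent elsewhere.
def subrange_opacity(values, lo, hi):
    v = np.asarray(values, dtype=float)
    return ((v >= lo) & (v < hi)).astype(float)

ranges = preview_subranges(0.0, 100.0, 4)
print(ranges[0])  # (0.0, 25.0)
```

Rendering one thumbnail per subrange with such a transfer function lets a visitor see what each portion of the data domain contains without ever being shown a transfer-function editor, which is the design goal the abstract describes.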
NASA Technical Reports Server (NTRS)
Apodaca, Tony; Porter, Tom
1989-01-01
The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.
RenderView: physics-based multi- and hyperspectral rendering using measured background panoramics
NASA Astrophysics Data System (ADS)
Talcott, Denise M.; Brown, Wade W.; Thomas, David J.
2003-09-01
As part of the survivability engineering process it is necessary to accurately model and visualize vehicle signatures in the multi- or hyperspectral bands of interest. The signature at a given wavelength is a function of the surface optical properties, reflection of the background and, in the thermal region, the emission of thermal radiation. Currently, it is difficult to obtain and utilize background models of sufficient fidelity when compared with the vehicle models. In addition, the background models create an additional layer of uncertainty in estimating the vehicle's signature. Therefore, to meet exacting rendering requirements we have developed RenderView, which incorporates the full bidirectional reflectance distribution function (BRDF). Instead of using a modeled background, we have incorporated a measured, calibrated background panoramic image to provide high-fidelity background interaction. Uncertainty in the background signature is reduced to the error of the measurement, which is considerably smaller than the uncertainty inherent in a modeled background. RenderView utilizes a number of different descriptions of the BRDF, including the Sandford-Robertson model. In addition, it provides complete conservation of energy with off-axis sampling. A description of RenderView will be presented along with a methodology developed for collecting background panoramics. Examples of the RenderView output and the background panoramics will be presented along with our approach to handling the solar irradiance problem.
Games Con Men Play: The Semiosis of Deceptive Interaction.
ERIC Educational Resources Information Center
Hankiss, Agnes
1980-01-01
Analyzes some of the most frequent deceptive interactions as rendered through case histories of male con artists and their victims taken from police records. Discusses the recurrent elements in both the con-games strategies and victims' way of interpreting those strategies. (JMF)
ERIC Educational Resources Information Center
DeVillar, Robert A.; Jiang, Binbin
2011-01-01
Creatively and rigorously blending historical research and contemporary data from various disciplines, this book cogently and comprehensively illustrates the problems and opportunities the American nation faces in education, economics, and the global arena. The authors propose a framework of transformation that would render American culture no…
Realistic tissue visualization using photoacoustic image
NASA Astrophysics Data System (ADS)
Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong
2018-02-01
Visualization methods are very important in biomedical imaging. As a technology for understanding living systems, biomedical imaging has the unique advantage of providing the most intuitive information in the image, and this advantage can be greatly improved by choosing a suitable visualization method. This is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information; unfortunately, the data themselves cannot directly convey that value, because images are always displayed in 2D space. Visualization is therefore the key that creates the real value of volume data. However, processing 3D data requires complicated algorithms for visualization and a high computational burden, so specialized algorithms and computational optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is closer to the real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and computed color while evaluating the shader parameters. In the rendering results, we succeeded in obtaining realistic texture from the photoacoustic data: rays reflected at the surface were visualized in white, and color reflected from deep tissue was visualized in red, like skin tissue. We also implemented the algorithm in CUDA within an OpenGL environment for real-time interactive imaging.
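The depth-dependent color transfer function described above can be sketched simply: color shifts from white at the surface toward skin-red with depth, while opacity follows the photoacoustic intensity. The colors, depth scale and blending below are our own illustrative parameters, not the paper's calibrated values.

```python
import numpy as np

# Hedged sketch of a depth-dependent color transfer function for
# photoacoustic volume rendering (all constants are hypothetical).
def depth_color(intensity, depth_mm, max_depth_mm=10.0):
    """Blend from white at the skin surface toward red at depth;
    opacity is taken from the (normalized) photoacoustic intensity."""
    t = np.clip(depth_mm / max_depth_mm, 0.0, 1.0)
    white = np.array([1.0, 1.0, 1.0])
    red = np.array([0.8, 0.2, 0.2])
    rgb = (1.0 - t) * white + t * red
    alpha = np.clip(intensity, 0.0, 1.0)
    return rgb, alpha

rgb, a = depth_color(0.7, depth_mm=0.0)
print(rgb, a)  # a surface sample renders white
```

In a ray caster, such a function would be evaluated per sample along each ray, with depth measured from the first skin intersection, before front-to-back compositing.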
Random forest classification of large volume structures for visuo-haptic rendering in CT images
NASA Astrophysics Data System (ADS)
Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz
2016-03-01
For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine-learning-based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full-volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also take a look at these structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of the leave-some-out experiments we obtain best mean Dice ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with Dice 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
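The core pipeline — per-voxel features fed to a random forest, evaluated by Dice overlap — can be sketched with synthetic data. The features, class labels and forest size below are stand-ins for the paper's multi-dimensional feature space; a faithful evaluation would, as in the paper, train and test on different patients.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hedged sketch: two synthetic "tissues" described by a CT-like intensity
# and a normalized z coordinate (hypothetical features, not the paper's).
rng = np.random.default_rng(42)
n = 2000
intensity = np.concatenate([rng.normal(-100, 30, n), rng.normal(60, 30, n)])
z_coord = np.concatenate([rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
X = np.column_stack([intensity, z_coord])        # per-voxel feature vectors
y = np.concatenate([np.zeros(n), np.ones(n)])    # 0 = background, 1 = tissue

# For brevity we predict on the training voxels; real leave-some-out
# evaluation would hold entire patients out of training.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X)

def dice(a, b):
    """Dice overlap between two binary label maps, the evaluation metric."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

print(round(dice(pred, y), 3))
```

A multi-label map as in the paper is obtained the same way with more than two classes; the Dice ratio is then reported per tissue label.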
A Nationwide Experimental Multi-Gigabit Network
2003-03-01
television and cinema, and to real-time interactive teleconferencing. There is another variable which affects this happy growth in network bandwidth and... render large scientific data sets with interactive frame rates on the desktop or in an immersive virtual reality (VR) environment. In our design, we
Volumetric ambient occlusion for real-time rendering and games.
Szirmay-Kalos, L; Umenhoffer, T; Toth, B; Szecsi, L; Sbert, M
2010-01-01
This new algorithm, based on GPUs, can compute ambient occlusion to inexpensively approximate global-illumination effects in real-time systems and games. The first step in deriving this algorithm is to examine how ambient occlusion relates to the physically founded rendering equation. The correspondence stems from a fuzzy membership function that defines what constitutes nearby occlusions. The next step is to develop a method to calculate ambient occlusion in real time without precomputation. The algorithm is based on a novel interpretation of ambient occlusion that measures the relative volume of the visible part of the surface's tangent sphere. The new formula's integrand has low variation and thus can be estimated accurately with a few samples.
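The tangent-sphere formulation above lends itself to a direct Monte Carlo estimate. The sketch below is an illustrative, unoptimized CPU version (the paper targets GPU shaders; here the scene is reduced to spherical occluders, an assumption made for brevity): it estimates the unoccluded fraction of the tangent sphere's volume at a surface point.

```python
import math
import random

def volumetric_ao(p, n, r, occluders, samples=20000, seed=1):
    """Monte Carlo sketch of volumetric ambient occlusion: estimate the
    fraction of the tangent sphere (radius r, centred at p + r*n, i.e.
    tangent to the surface at p with unit normal n) that is NOT inside
    any occluding sphere.  occluders = [(center, radius), ...]."""
    rng = random.Random(seed)
    cx, cy, cz = (p[i] + r * n[i] for i in range(3))
    free = 0
    for _ in range(samples):
        # uniform point inside the tangent sphere, by rejection sampling
        while True:
            x, y, z = (rng.uniform(-r, r) for _ in range(3))
            if x * x + y * y + z * z <= r * r:
                break
        q = (cx + x, cy + y, cz + z)
        occupied = any(math.dist(q, oc) <= orad for oc, orad in occluders)
        if not occupied:
            free += 1
    return free / samples  # 1.0 = fully open, 0.0 = fully occluded
```

For an unobstructed point this returns 1.0; a large occluder engulfing the tangent sphere drives it toward 0.0. The low-variation integrand noted in the abstract is what lets the real-time version get away with very few samples.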
NASA Astrophysics Data System (ADS)
Forbes, Angus; Villegas, Javier; Almryde, Kyle R.; Plante, Elena
2014-03-01
In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional Magnetic Resonance Imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on Independent Component Analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. By displaying these clusters as they change over time, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language. The raw fMRI data is presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering, and the clusters are presented using a ray casting technique for volume rendering. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data.
Evaluation of haptic interfaces for simulation of drill vibration in virtual temporal bone surgery.
Ghasemloonia, Ahmad; Baxandall, Shalese; Zareinia, Kourosh; Lui, Justin T; Dort, Joseph C; Sutherland, Garnette R; Chan, Sonny
2016-11-01
Surgical training is evolving from an observership model towards a new paradigm that includes virtual-reality (VR) simulation. In otolaryngology, temporal bone dissection has become intimately linked with VR simulation, as the complexity of the anatomy demands a high level of surgeon aptitude and confidence. While an adequate 3D visualization of the surgical site is available in current simulators, the force feedback rendered during haptic interaction does not convey vibrations. This lack of vibration rendering limits the simulation fidelity of a surgical drill such as that used in temporal bone dissection. In order to develop an immersive simulation platform capable of haptic force and vibration feedback, the efficacy of hand controllers for rendering vibration in different drilling circumstances needs to be investigated. In this study, the vibration rendering abilities of four different haptic hand controllers were analyzed and compared to find the best commercial haptic hand controller. A test rig was developed to record vibrations encountered during temporal bone dissection, and software was written to render the recorded signals without adding hardware to the system. An accelerometer mounted on the end-effector of each device recorded the rendered vibration signals. The newly recorded vibration signal was compared with the input signal in both the time and frequency domains by coherence and cross-correlation analyses to quantitatively measure the fidelity of these devices in rendering vibrotactile drilling feedback under different drilling conditions. This method can be used to assess vibration rendering ability in VR simulation systems and to select suitable haptic devices.
Tiled vector data model for the geographical features of symbolized maps.
Li, Lin; Hu, Wei; Zhu, Haihong; Li, You; Zhang, Hang
2017-01-01
Electronic maps (E-maps) offer convenient access to real-world space. Although web map services can display maps on screens, a more important function is their ability to access geographical features. An E-map based on raster tiles is inferior to one based on vector tiles in terms of interactivity, because vector maps provide a convenient and effective way to access and manipulate web map features. However, the critical issue in rendering tiled vector maps is that geographical features rendered in the form of map symbols via vector tiles may exhibit visual discontinuities, such as graphic conflicts and losses of data around tile borders, which are likely the main obstacles to exploring vector map tiles on the web. This paper proposes a tiled vector data model for the geographical features of symbolized maps that considers the relationships among geographical features, symbol representations and map renderings. The model presents a method to tailor geographical features in terms of map symbols and to 'add' (join) them on two levels: geographical features and map features. Maps based on the proposed model can thus resolve the visual discontinuity problem without weakening the interactivity of vector maps. The model is validated on two map data sets, and the results demonstrate that the rendered (symbolized) web maps present smooth visual continuity.
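The 'addition' (join) operation described above can be pictured as gluing back together the polyline pieces of a single feature that were clipped at tile borders. The following is a toy sketch, assuming exact coordinate matches at the shared border vertices (the paper's model is more general):

```python
def join_tile_features(parts):
    """Merge polyline pieces of one geographic feature that were clipped
    at tile borders, by repeatedly gluing pieces whose endpoints coincide.
    parts: list of polylines, each a list of (x, y) vertices."""
    parts = [list(p) for p in parts]
    merged = True
    while merged and len(parts) > 1:
        merged = False
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                a, b = parts[i], parts[j]
                if a[-1] == b[0]:            # tail of a meets head of b
                    parts[i] = a + b[1:]
                elif b[-1] == a[0]:          # tail of b meets head of a
                    parts[i] = b + a[1:]
                elif a[-1] == b[-1]:         # tails meet: reverse b
                    parts[i] = a + b[-2::-1]
                elif a[0] == b[0]:           # heads meet: reverse a
                    parts[i] = a[::-1] + b[1:]
                else:
                    continue
                del parts[j]
                merged = True
                break
            if merged:
                break
    return parts

# Two pieces of one road split at the tile border x = 1, plus an unrelated feature.
print(join_tile_features([[(0, 0), (1, 0)], [(1, 0), (2, 0)], [(5, 5), (6, 6)]]))
# → [[(0, 0), (1, 0), (2, 0)], [(5, 5), (6, 6)]]
```

Symbol-level conflicts at borders (e.g. a road label drawn twice) need the additional symbol-aware tailoring the paper describes; coordinate gluing alone only restores geometry.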
Research on Visualization of Ground Laser Radar Data Based on OSG
NASA Astrophysics Data System (ADS)
Huang, H.; Hu, C.; Zhang, F.; Xue, H.
2018-04-01
Three-dimensional (3D) laser scanning is an advanced technology integrating optics, mechanics, electronics, and computing. It can scan the complete shape and form of spatial objects with high precision, directly collecting the point cloud data of a ground object and creating a structure from it for rendering. A capable 3D rendering engine is needed to optimize and display the resulting 3D model in order to meet the demands of real-time realistic rendering and of scene complexity. OpenSceneGraph (OSG) is an open source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend, and it is therefore widely used in virtual simulation, virtual reality, and scientific and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is built on OSG and the cross-platform C++ application framework Qt. For point cloud data in .txt format and triangulation network data files in .obj format, the platform implements display of 3D laser point clouds and triangulation network data. Experiments show that the platform has strong practical value, as it is easy to operate and provides good interaction.
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and simulation performance directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and in user satisfaction. However, optimizing simulation performance by hand on each individual hardware platform is not practical. Hence, we present a mixed integer programming model to optimize graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
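As a rough illustration of the optimization phase, the sketch below replaces the paper's mixed integer program with a brute-force search over a toy design space. The cost and quality models here are invented stand-ins for the measurements that the exploratory proxy code would gather in the identification phase:

```python
import math
from itertools import product

# Hypothetical discrete design choices (stand-ins, not the paper's values).
TEXTURE_SIZES = [256, 512, 1024, 2048]            # texels per side
CANVAS_RES = [(640, 480), (1280, 720), (1920, 1080)]
FRAME_BUDGET_MS = 16.7                            # ~60 fps target

def frame_cost_ms(tex, res):
    """Toy cost model: frame time grows with texture area and pixel count."""
    w, h = res
    return tex * tex * 4e-6 + w * h * 8e-6

def quality(tex, res):
    """Toy quality score rewarding higher texture and canvas resolution."""
    w, h = res
    return math.log2(tex) + math.log2(w * h)

def optimize():
    """Pick the highest-quality configuration that fits the frame budget."""
    feasible = [(tex, res) for tex, res in product(TEXTURE_SIZES, CANVAS_RES)
                if frame_cost_ms(tex, res) <= FRAME_BUDGET_MS]
    return max(feasible, key=lambda c: quality(*c))

print(optimize())  # → (1024, (1280, 720)) under this toy model
```

A real deployment would solve the integer program with a proper solver; exhaustive enumeration only works because this toy design space is tiny.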
Gesture Interaction Browser-Based 3D Molecular Viewer.
Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela
2016-01-01
The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instructing users in less IT-oriented environments, such as medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). Interaction with the 3D models is performed with a Leap Motion controller that allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better understanding of various translational bioinformatics problems in both biomedical research and education.
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
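The inverse transform method mentioned above turns uniform random numbers into samples from a target distribution via the inverse CDF. Below is a minimal sketch of a stress-fluctuation jump process built this way; the power-law magnitude distribution and all parameters are illustrative choices, not the paper's fitted model:

```python
import math
import random

def sample_power_law(alpha, xmin, rng):
    """Inverse-transform sample from a Pareto tail p(x) ∝ x^(-alpha), x >= xmin.
    CDF: F(x) = 1 - (x/xmin)^(1-alpha)  =>  F^-1(u) = xmin * (1-u)^(1/(1-alpha))."""
    u = rng.random()
    return xmin * (1.0 - u) ** (1.0 / (1.0 - alpha))

def jump_process(t_end, rate, alpha, xmin, seed=0):
    """Purely time-domain stochastic jump process: exponential waiting times
    between jumps (also sampled by inverse transform) and power-law jump
    magnitudes, returned as a list of (time, magnitude) events."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += -math.log(1.0 - rng.random()) / rate   # F^-1 of the exponential
        if t >= t_end:
            return events
        events.append((t, sample_power_law(alpha, xmin, rng)))

events = jump_process(t_end=10.0, rate=5.0, alpha=2.5, xmin=0.01)
```

At audio rates the event stream would be accumulated into a stress signal and fed to the acoustic/haptic synthesis; the heavy-tailed magnitudes are what give fracture crackle its intermittent character.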
TreeVector: scalable, interactive, phylogenetic trees for the web.
Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian
2010-01-28
Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and that can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than as an end graphic for print. TreeVector is fast and easy to use and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database websites.
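As a much-reduced sketch of the idea behind SVG tree rendering (this is not TreeVector's API or output format), a nested-tuple phylogeny can be laid out recursively and emitted as an SVG string that any browser displays natively:

```python
def tree_to_svg(tree, x=10, y_step=20, x_step=40):
    """Render a nested-tuple phylogeny, e.g. ("root", [leaf_or_subtree, ...]),
    as a minimal SVG string: horizontal lines for branches, a vertical line
    joining each node's children, and text labels for leaves."""
    lines, labels = [], []
    next_y = [10]  # next free leaf row (mutable closure state)

    def layout(node, x):
        if isinstance(node, str):                     # leaf: assign a row
            y = next_y[0]
            next_y[0] += y_step
            labels.append(f'<text x="{x + 4}" y="{y + 4}">{node}</text>')
            return y
        _name, children = node                        # internal node
        ys = []
        for child in children:
            cy = layout(child, x + x_step)
            lines.append(f'<line x1="{x}" y1="{cy}" x2="{x + x_step}" '
                         f'y2="{cy}" stroke="black"/>')
            ys.append(cy)
        lines.append(f'<line x1="{x}" y1="{min(ys)}" x2="{x}" '
                     f'y2="{max(ys)}" stroke="black"/>')
        return sum(ys) / len(ys)                      # parent sits mid-children

    layout(tree, x)
    body = "\n".join(lines + labels)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="300" '
            f'height="{next_y[0]}">\n{body}\n</svg>')

svg = tree_to_svg(("root", ["A", ("clade", ["B", "C"])]))
```

Because the output is plain SVG, the elements can carry links and CSS classes, which is what makes a tree browsable rather than a static print graphic.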
An Augmented Reality Nanomanipulator for Learning Nanophysics: The "NanoLearner" Platform
NASA Astrophysics Data System (ADS)
Marchi, Florence; Marliere, Sylvain; Florens, Jean Loup; Luciani, Annie; Chevrier, Joel
The work focuses on the description and evaluation of an augmented reality nanomanipulator, called the "NanoLearner" platform, used as an educational tool in practical nanophysics classes. Through virtual reality associated with multisensory renderings, students are immersed in the nanoworld, where they can interact in real time with a sample surface or an object using senses such as hearing, sight and touch. The role of each sensorial rendering in the understanding and control of the "approach-retract" interaction has been determined thanks to statistical studies obtained during the practical sessions. Finally, we present two extensions of the use of this innovative tool: investigating nano effects in living organisms, and allowing the general public to gain a natural understanding of nanophenomena.
DOT National Transportation Integrated Search
2016-10-01
This report is the first of three examining driver medical review practices in the United States and how they fulfilled the basic functions of identifying, assessing, and rendering licensing decisions on medically at-risk drivers. The aim was not to ...
42 CFR 495.306 - Establishing patient volume.
Code of Federal Regulations, 2011 CFR
2011-10-01
... encounter means services rendered to an individual on any one day where— (i) Medicaid or CHIP (or a Medicaid or CHIP demonstration project approved under section 1115 of the Act) paid for part or all of the service; (ii) Medicaid or CHIP (or a Medicaid or CHIP demonstration project approved under section 1115 of...
Pathfinder. Volume 8, Number 3, May/June 2010. Technology - Rendering an Ever-Clearer Picture
2010-06-01
Office of Corporate Communications, 4600 Sangamore Road, Mail Stop D-54, Bethesda, MD 20816-5003. Telephone: (301) 227-7388, DSN 287-7388. E-mail: pathfinder@nga.mil. Director: Vice Adm. Robert B. Murrett, U.S. Navy.
1984-05-01
growth toward lands already annexed and away from agricultural lands until needed demands a sound basis upon which to render judgment. For the City... for Bureau of Land Management. Johnson, Patti J. 1978. Patwin. In Handbook of North American Indians, Volume 8, California. Robert F. Heizer, ed.
Teistler, M; Breiman, R S; Lison, T; Bott, O J; Pretschner, D P; Aziz, A; Nowinski, W L
2008-10-01
Volumetric imaging (computed tomography and magnetic resonance imaging) provides increased diagnostic detail but is associated with the problem of navigating through large amounts of data. In an attempt to overcome this problem, a novel 3D navigation tool has been designed and developed that is based on an alternative input device: a 3D mouse that allows simultaneous definition of the position and orientation of orthogonal or oblique multiplanar reformatted images or slabs, which are presented within a virtual 3D scene together with the volume-rendered data set, and additionally as 2D images. Slabs are visualized with maximum intensity projection, average intensity projection, or the standard volume rendering technique. A prototype based on PC technology has been implemented and tested by several radiologists. It has been shown to be easily understandable and usable after a very short learning phase. Our solution may help to fully exploit the diagnostic potential of volumetric imaging by allowing a more efficient reading process than currently deployed solutions based on a conventional mouse and keyboard.
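The slab projection modes mentioned above are simple to state: each output pixel takes the maximum (MIP) or mean (AIP) of the voxels the slab covers. A minimal axis-aligned sketch follows (the actual tool also supports oblique slabs, which additionally require resampling):

```python
def slab_projections(volume, z0, z1):
    """Collapse slices z0..z1 (inclusive) of a volume, stored as
    volume[z][y][x], into 2D maximum- and average-intensity projections."""
    zs = range(z0, z1 + 1)
    ny, nx = len(volume[0]), len(volume[0][0])
    mip = [[max(volume[z][y][x] for z in zs) for x in range(nx)]
           for y in range(ny)]
    aip = [[sum(volume[z][y][x] for z in zs) / len(zs) for x in range(nx)]
           for y in range(ny)]
    return mip, aip

# Toy 3-slice, 2x2 volume.
vol = [[[0, 1], [2, 3]],
       [[4, 0], [1, 2]],
       [[2, 5], [0, 1]]]
mip, aip = slab_projections(vol, 0, 2)
print(mip)  # → [[4, 5], [2, 3]]
print(aip)  # → [[2.0, 2.0], [1.0, 2.0]]
```

MIP favors bright structures such as contrast-filled vessels; AIP approximates a thick-slice average and suppresses noise.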
Silverstein, Jonathan C; Dech, Fred; Kouchoukos, Philip L
2004-01-01
Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusion would expand surgeons' capabilities in interpreting data and in planning treatment. Such an application has not been seen clinically. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion. We describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich stereo collaborative volume visualization environment for use by surgeons, in which DICOM datasets will self-report underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing Tele-immersive DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, VISIBLE HUMAN, volumetric fusion and C++ on a Tele-immersive platform. This application will allow us to identify structures and display polygonal representations from atlas data overlaid with the volume rendering. First, atlas data is automatically translated, rotated, and scaled to the patient data during loading, using a public domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through the use of collision detection or intersection testing of various transparent polygonal representations, the polygonal structures are highlighted within the volumetric representation while the SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data. This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.
Hänel, Claudia; Pieperhoff, Peter; Hentschel, Bernd; Amunts, Katrin; Kuhlen, Torsten
2014-01-01
The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) provides not only information about the localization and distribution of the volume loss, but also helps in understanding the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image; spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and by the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: a direct line of sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run both in standard desktop environments and in immersive virtual reality environments with stereoscopic viewing for improved depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions. PMID:24847243
Operating System Support for Mobile Interactive Applications
2002-08-01
[Table: polygons per scene — Taj Mahal: 127406; Café: 138598; Notre Dame: 160206; Buckingham Palace (interior): 235572. Figure: rendering demand (millions of cycles) vs. number of polygons rendered, for (a) a random camera position and (b) a fixed camera position; the x-axis is the number of polygons rendered, i.e. a multiple of the original model size.]
Graphical Neuroimaging Informatics: Application to Alzheimer’s Disease
Bowman, Ian; Joshi, Shantanu H.; Greer, Vaughan
2013-01-01
The Informatics Visualization for Neuroimaging (INVIZIAN) framework graphically displays image and meta-data information from sizeable collections of neuroimaging data as a whole, using a dynamic and compelling user interface. Users can fluidly interact with an entire collection of cortical surfaces using only their mouse. In addition, users can cluster and group brains in multiple ways for subsequent comparison using graphical data mining tools. In this article, we illustrate the utility of INVIZIAN for simultaneous exploration and mining of a large collection of extracted cortical surface data arising in clinical neuroimaging studies of patients with Alzheimer's Disease and mild cognitive impairment, as well as healthy control subjects. Alzheimer's Disease is particularly interesting due to its widespread effects on cortical architecture and the alterations of volume in specific brain areas associated with memory. We demonstrate INVIZIAN's ability to render multiple brain surfaces from multiple diagnostic groups of subjects, showcase the interactivity of the system, and show how INVIZIAN can be employed to generate hypotheses about the collection of data which would be suitable for direct access to the underlying raw data and subsequent formal statistical analysis. Specifically, we use INVIZIAN to show how cortical thickness and hippocampal volume differences between groups are evident even in the absence of more formal hypothesis testing. In the context of neurological diseases linked to brain aging such as AD, INVIZIAN provides a unique means of considering the entirety of whole brain datasets, looking for interesting relationships among them, and thereby deriving new ideas for further research and study. PMID:24203652
Chemomechanical Polymers as Sensors and Actuators for Biological and Medicinal Applications
Schneider, Hans-Jörg; Kato, Kazuaki; Strongin, Robert M.
2007-01-01
Changes in the chemical environment can trigger large motions in chemomechanical polymers. The unique feature of such intelligent materials, mostly in the form of hydrogels, is that they serve as sensors and actuators at the same time and do not require any measuring devices, transducers or power supplies. Until recently, the most often used of these materials responded to changes in pH. Chemists are now increasingly using supramolecular recognition sites in materials, covalently bound to the polymer backbone. This allows one to use a nearly unlimited variety of guest (or effector) compounds in the environment for a selective response through automatically triggered size changes. This is illustrated with non-covalent interactions of effectors comprising metal ions, isomeric organic compounds, including enantiomers, nucleotides, amino acids, and peptides. Two different effector molecules can induce motions as functions of their concentration, thus representing a logical AND gate. This concept is particularly fruitful with effector compounds such as peptides, which only trigger size changes if, e.g., copper ions are present in the surroundings. Another principle relies on the fast formation of covalent bonds between an effector and the chemomechanical polymer. The most promising application is the selective interaction of covalently fixed boronic acid residues with glucose, which lends itself not only to sensing, but eventually also to the delivery of drugs such as insulin. The speed of the responses can be significantly increased by increasing the surface-to-volume ratio of the polymer particles. Of particular interest is the sensitivity increase that can be achieved by downsizing the particle volume. PMID:19606275
NASA Astrophysics Data System (ADS)
Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael
2012-02-01
Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib cage and spine. The problem is addressed in a model-based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.
NASA Technical Reports Server (NTRS)
Mehling, Joshua S.; Holley, James; O'Malley, Marcia K.
2015-01-01
The fidelity with which series elastic actuators (SEAs) render desired impedances is important. Numerous approaches to SEA impedance control have been developed under the premise that high-precision actuator torque control is a prerequisite. Indeed, the design of an inner torque compensator has a significant impact on actuator impedance rendering. The disturbance observer (DOB) based torque control implemented in NASA's Valkyrie robot is considered here and a mathematical model of this torque control, cascaded with an outer impedance compensator, is constructed. While previous work has examined the impact a disturbance observer has on torque control performance, little has been done regarding DOBs and impedance rendering accuracy. Both simulation and a series of experiments are used to demonstrate the significant improvements possible in an SEA's ability to render desired dynamic behaviors when utilizing a DOB. Actuator transparency at low impedances is improved, closed loop hysteresis is reduced, and the actuator's dynamic response to both commands and interaction torques more faithfully matches that of the desired model. All of this is achieved by leveraging DOB based control rather than increasing compensator gains, thus making improved SEA impedance control easier to achieve in practice.
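The role of the disturbance observer can be shown on a toy first-order actuator model: the DOB infers the lumped disturbance from the gap between the measured output and the nominal model's prediction, then subtracts the estimate from the command. This is an illustrative sketch under invented plant parameters, not Valkyrie's controller:

```python
def simulate(steps=200, a=0.9, b=0.1, d=0.5, use_dob=True):
    """Discrete-time sketch of DOB-based torque control on a toy plant
    y[k+1] = a*y[k] + b*(u[k] + d), with constant unknown disturbance d.
    The commanded torque is 0, so |y| at the end measures residual error."""
    y, y_prev, u_prev, d_hat = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        if use_dob:
            # one-step-delayed disturbance estimate via the inverse nominal model:
            # whatever output the nominal model cannot explain is attributed to d
            d_hat = (y - a * y_prev - b * u_prev) / b
        u = 0.0 - d_hat            # desired torque is 0; cancel the estimate
        y_prev, u_prev = y, u
        y = a * y + b * (u + d)    # true plant, disturbance included
    return y

print(abs(simulate(use_dob=False)))  # ≈ 0.5: steady-state error without DOB
print(abs(simulate(use_dob=True)))   # ≈ 0.0: disturbance rejected
```

With the constant disturbance cancelled after one step, the closed loop behaves like the nominal model, which is exactly the transparency and hysteresis-reduction effect the abstract reports, achieved without raising compensator gains.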
cellVIEW: a Tool for Illustrative and Multi-Scale Rendering of Large Biomolecular Datasets
Le Muzic, Mathieu; Autin, Ludovic; Parulek, Julius; Viola, Ivan
2017-01-01
In this article we introduce cellVIEW, a new system to interactively visualize large biomolecular datasets at the atomic level. Our tool is unique and has been specifically designed to match the ambitions of our domain experts to model and interactively visualize structures comprising several billion atoms. The cellVIEW system integrates acceleration techniques to allow for real-time graphics performance at a 60 Hz display rate on datasets representing large viruses and bacterial organisms. Inspired by the work of scientific illustrators, we propose a level-of-detail scheme whose purpose is twofold: accelerating the rendering and reducing visual clutter. The main part of our datasets consists of macromolecules, but they also comprise nucleic acid strands stored as sets of control points. For that specific case, we extend our rendering method to support the dynamic generation of DNA strands directly on the GPU. It is noteworthy that our tool has been implemented directly inside a game engine. We chose to rely on a third party engine to reduce the software development workload and to make bleeding-edge graphics techniques more accessible to end-users. To our knowledge cellVIEW is the only suitable solution for interactive visualization of large biomolecular landscapes at the atomic level, and it is freely available to use and extend. PMID:29291131
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas
2016-12-06
LibIsopach is a toolkit for high performance distributed immersive visualization, leveraging modern OpenGL. It features a multi-process scenegraph, explicit instance rendering, mesh generation, and three-dimensional user interaction event processing.
Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Crockett, Thomas W.
1999-01-01
This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.
Real-time 3D image reconstruction guidance in liver resection surgery
Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques
2014-01-01
Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, it adds difficulties that can be reduced through computer technology. Methods From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that enhances the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in improving safety, but also its current limits, which automatic augmented reality will overcome. Conclusions Virtual patient modeling should be mandatory for certain interventions that now have to be defined, such as liver surgery. Augmented reality is clearly the next step in the new surgical instrumentation but currently remains limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the Hybrid OR. PMID:24812598
Comparison of gesture and conventional interaction techniques for interventional neuroradiology.
Hettig, Julian; Saalfeld, Patrick; Luz, Maria; Becker, Mathias; Skalej, Martin; Hansen, Christian
2017-09-01
Interaction with radiological image data and volume renderings within a sterile environment is a challenging task. Clinically established methods such as joystick control and task delegation can be time-consuming and error-prone and interrupt the workflow. New touchless input modalities may have the potential to overcome these limitations, but their value compared to established methods is unclear. We present a comparative evaluation to analyze the value of two gesture input modalities (Myo Gesture Control Armband and Leap Motion Controller) versus two clinically established methods (task delegation and joystick control). A user study was conducted with ten experienced radiologists by simulating a diagnostic neuroradiological vascular treatment with two frequently used interaction tasks in an experimental operating room. The input modalities were assessed using task completion time, perceived task difficulty, and subjective workload. Overall, the clinically established method of task delegation performed best under the study conditions. In general, gesture control did not outperform the clinical input approach. However, the Myo Gesture Control Armband showed potential for a simple image-selection task. Novel input modalities have the potential to take over single tasks more efficiently than clinically established methods. The results of our user study show the influence of task characteristics such as task complexity on performance with specific input modalities. Accordingly, future work should consider task characteristics to provide a useful gesture interface for a specific use case instead of an all-in-one solution.
An application of the MPP to the interactive manipulation of stereo images of digital terrain models
NASA Technical Reports Server (NTRS)
Pol, Sanjay; Mcallister, David; Davis, Edward
1987-01-01
Massively Parallel Processor algorithms were developed for the interactive manipulation of flat shaded digital terrain models defined over grids. The emphasis is on real time manipulation of stereo images. Standard graphics transformations are applied to a 128 x 128 grid of elevations followed by shading and a perspective projection to produce the right eye image. The surface is then rendered using a simple painter's algorithm for hidden surface removal. The left eye image is produced by rotating the surface 6 degs about the viewer's y axis followed by a perspective projection and rendering of the image as described above. The left and right eye images are then presented on a graphics device using standard stereo technology. Performance evaluations and comparisons are presented.
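The stereo-pair construction described above (right eye projected directly, left eye after a 6-degree rotation about the viewer's y axis) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the MPP implementation; the pinhole distance `d` and the toy elevation grid are assumptions:

```python
import numpy as np

def perspective_project(points, d=3.0):
    """Project 3D points onto the z = 0 plane through a pinhole at distance d."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    s = d / (d - z)                      # per-point perspective scale
    return np.stack([x * s, y * s], axis=1)

def stereo_pair(points, sep_deg=6.0, d=3.0):
    """Right eye: direct projection. Left eye: rotate about y, then project."""
    t = np.radians(sep_deg)
    rot_y = np.array([[ np.cos(t), 0.0, np.sin(t)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(t), 0.0, np.cos(t)]])
    right = perspective_project(points, d)
    left = perspective_project(points @ rot_y.T, d)
    return left, right

# toy 4x4 elevation grid as a point cloud (the paper uses 128x128)
g = np.linspace(-1, 1, 4)
xx, yy = np.meshgrid(g, g)
zz = 0.1 * np.sin(xx * np.pi)
pts = np.stack([xx.ravel(), yy.ravel(), zz.ravel()], axis=1)
left, right = stereo_pair(pts)
```

Presenting `left` and `right` on a stereo display then yields the depth percept; hidden-surface removal (the painter's algorithm in the paper) would be applied per eye before display.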
NASA Astrophysics Data System (ADS)
Bates, Lisa M.; Hanson, Dennis P.; Kall, Bruce A.; Meyer, Frederic B.; Robb, Richard A.
1998-06-01
An important clinical application of biomedical imaging and visualization techniques is provision of image guided neurosurgical planning and navigation techniques using interactive computer display systems in the operating room. Current systems provide interactive display of orthogonal images and 3D surface or volume renderings integrated with and guided by the location of a surgical probe. However, structures in the 'line-of-sight' path which lead to the surgical target cannot be directly visualized, presenting difficulty in obtaining full understanding of the 3D volumetric anatomic relationships necessary for effective neurosurgical navigation below the cortical surface. Complex vascular relationships and histologic boundaries like those found in arteriovenous malformations (AVMs) also contribute to the difficulty in determining optimal approaches prior to actual surgical intervention. These difficulties demonstrate the need for interactive oblique imaging methods to provide 'line-of-sight' visualization. Capabilities for 'line-of-sight' interactive oblique sectioning are present in several current neurosurgical navigation systems. However, our implementation is novel in that it utilizes a completely independent software toolkit, AVW (A Visualization Workshop) developed at the Mayo Biomedical Imaging Resource, integrated with a current neurosurgical navigation system, the COMPASS stereotactic system at Mayo Foundation. The toolkit is a comprehensive, C-callable imaging toolkit containing over 500 optimized imaging functions and structures. The powerful functionality and versatility of the AVW imaging toolkit provided facile integration and implementation of the desired interactive oblique sectioning using a finite set of functions. The implementation of the AVW-based code resulted in higher-level functions for complete 'line-of-sight' visualization.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-27
... Consumption A. Demand for Biomass-Based Diesel B. Availability of Feedstocks To Produce 1.28 Billion Gallons of Biodiesel 1. Grease and Rendered Fats 2. Corn Oil 3. Soybean Oil 4. Effects on Food Prices 5. Other Bio-Oils C. Production Capacity D. Consumption Capacity E. Biomass-Based Diesel Distribution...
Observing the Interactive Qualities of L2 Instructional Practices in ESL and FSL Classrooms
ERIC Educational Resources Information Center
Zuniga, Michael; Simard, Daphnée
2016-01-01
Discourse features that promote the generation of interactionally modified input and output, such as negotiation for meaning, have been shown to significantly enhance second language acquisition. Research has also identified several characteristics of instructional practices that render them more or less propitious to the generation of these…
iview: an interactive WebGL visualizer for protein-ligand complex.
Li, Hongjian; Leung, Kwong-Sak; Nakane, Takanori; Wong, Man-Hon
2014-02-25
Visualization of protein-ligand complexes plays an important role in elucidating protein-ligand interactions and aiding novel drug design. Most existing web visualizers either rely on slow software rendering or lack virtual reality support. The vital feature of macromolecular surface construction is also unavailable. We have developed iview, an easy-to-use interactive WebGL visualizer of protein-ligand complexes. It exploits hardware acceleration rather than software rendering. It features three special effects in virtual reality settings, namely anaglyph, parallax barrier and Oculus Rift, resulting in visually appealing identification of intermolecular interactions. It supports four surface representations: van der Waals surface, solvent excluded surface, solvent accessible surface and molecular surface. Moreover, based on the feature-rich version of iview, we have also developed a neat, tailor-made version specifically for our istar web platform for protein-ligand docking purposes. This demonstrates the excellent portability of iview. Using innovative 3D techniques, we provide a user-friendly visualizer that is not intended to compete with professional visualizers, but to enable easy accessibility and platform independence.
Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon
2016-01-01
In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy, an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real time based on simulation using a sliding yield surface, i.e., a surface defining the border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real time in accordance with the interaction input. A physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.
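The sticking/sliding decision against a yield surface can be illustrated with a minimal sketch. Note the assumptions: the paper identifies the actual yield surface from automated palpation data, whereas here a simple Coulomb friction cone with an assumed coefficient `mu` stands in for it:

```python
import numpy as np

def contact_state(force, normal, mu=0.5):
    """Classify the contact as sticking or sliding by testing whether the
    external contact force lies inside the yield surface.

    Assumption: the yield surface is approximated by a Coulomb friction cone
    (tangential force <= mu * normal force); the real method uses a
    data-identified surface instead.
    """
    n = normal / np.linalg.norm(normal)
    f_n = np.dot(force, n)                     # normal component (along n)
    f_t = np.linalg.norm(force - f_n * n)      # tangential magnitude
    return "sticking" if f_t <= mu * max(f_n, 0.0) else "sliding"
```

In the sticking state the proxy stays fixed on the undeformed surface; once the force crosses the yield surface, the proxy is advanced tangentially, which is what lets a single interpolation model cover both regimes.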
IBM techexplorer and MathML: Interactive Multimodal Scientific Documents
NASA Astrophysics Data System (ADS)
Diaz, Angel
2001-06-01
The World Wide Web provides a standard publishing platform for disseminating scientific and technical articles, books, journals, courseware, or even homework on the internet; the transition from paper to the web has brought new opportunities for creating interactive content. Students, scientists, and engineers are now faced with the task of rendering the 2D presentational structure of mathematics, harnessing the wealth of scientific and technical software, and creating truly accessible scientific portals across international boundaries and markets. The recent emergence of World Wide Web Consortium (W3C) standards such as the Mathematical Markup Language (MathML), the Extensible Stylesheet Language (XSL), and Aural CSS (ACSS) provides a foundation whereby mathematics can be displayed, enlivened, computed, and audio formatted. With interoperability ensured by standards, software applications can be easily brought together to create extensible and interactive scientific content. In this presentation we will provide an overview of the IBM techexplorer Hypermedia Browser, a web browser plug-in and ActiveX control aimed at bringing interactive mathematics to the masses across platforms and applications. We will demonstrate "live" mathematics, where documents that contain MathML expressions can be edited and computed right inside your favorite web browser. This demonstration will be generalized as we show how MathML can be used to enliven even PowerPoint presentations. Finally, we will close the loop by demonstrating a novel approach to spoken mathematics based on MathML, DOM, XSL, ACSS, techexplorer, and IBM ViaVoice. By making use of techexplorer as the glue that binds the rendered content to the web browser, the back-end computation software, the Java applets that augment the exposition, and voice-rendering systems such as ViaVoice, authors can indeed create truly extensible and interactive scientific content.
For more information see: [http://www.software.ibm.com/techexplorer] [http://www.alphaworks.ibm.com] [http://www.w3.org]
Volumetric visualization algorithm development for an FPGA-based custom computing machine
NASA Astrophysics Data System (ADS)
Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim
1998-05-01
Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.
Rowe, Steven P; Zinreich, S James; Fishman, Elliot K
2018-06-01
Three-dimensional (3D) visualizations of volumetric data from CT have gained widespread clinical acceptance and are an important method for evaluating complex anatomy and pathology. Recently, cinematic rendering (CR), a new 3D visualization methodology, has become available. CR utilizes a lighting model that allows for the production of photorealistic images from isotropic voxel data. Given how new this technique is, studies to evaluate its clinical utility and any potential advantages or disadvantages relative to other 3D methods such as volume rendering have yet to be published. In this pictorial review, we provide examples of normal calvarial, maxillofacial, and skull base anatomy and pathological conditions that highlight the potential for CR images to aid in patient evaluation and treatment planning. The highly detailed images and nuanced shadowing that are intrinsic to CR are well suited to the display of the complex anatomy in this region of the body. We look forward to studies with CR that will ascertain the ultimate value of this methodology to evaluate calvarium, maxillofacial, and skull base morphology as well as other complex anatomic structures.
Sewerin, Philipp; Ostendorf, Benedikt; Hueber, Axel J; Kleyer, Arnd
2018-04-01
Until now, most major medical advancements have been achieved through hypothesis-driven research within the scope of clinical trials. However, due to a multitude of variables, only a certain number of research questions can be addressed in a single study, rendering these studies expensive and time-consuming. Big data acquisition enables a new data-based approach in which large volumes of data can be used to investigate all variables, thus opening new horizons. Due to the universal digitalization of the data as well as ever-improving hardware and software solutions, imaging would appear to be predestined for such analyses. Several small studies have already demonstrated that automated analysis algorithms and artificial intelligence can identify pathologies with high precision. Such automated systems would also seem well suited for rheumatology imaging, since a method for individualized risk stratification has long been sought for these patients. However, despite all the promising options, the heterogeneity of the data and the highly complex regulations covering data protection in Germany would still render a big data solution for imaging difficult today. Overcoming these boundaries is challenging, but the enormous potential advances in clinical management and science render pursuit of this goal worthwhile.
Visualizing Vector Fields Using Line Integral Convolution and Dye Advection
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu
1996-01-01
We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
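The core of LIC is that each output pixel averages the input noise texture along the streamline passing through it, which correlates pixel values along the flow. A minimal (and deliberately naive) 2D sketch conveys the idea; the fixed unit step, nearest-neighbour sampling, short box kernel, and double-counted seed pixel are simplifications relative to production LIC:

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10):
    """Naive Line Integral Convolution: average the noise texture along the
    local streamline traced forward and backward from each pixel."""
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for iy in range(h):
        for ix in range(w):
            acc, n = 0.0, 0
            for sign in (1.0, -1.0):            # trace both directions
                x, y = float(ix), float(iy)
                for _ in range(length):
                    i, j = int(round(y)), int(round(x))
                    if not (0 <= i < h and 0 <= j < w):
                        break                   # streamline left the domain
                    acc += noise[i, j]
                    n += 1
                    mag = np.hypot(vx[i, j], vy[i, j]) or 1.0
                    x += sign * vx[i, j] / mag  # unit step along the field
                    y += sign * vy[i, j] / mag
            out[iy, ix] = acc / max(n, 1)
    return out
```

Dye advection then amounts to injecting colored values into the field and transporting them with the same streamline integration; the fast-recompute algorithm in the paper avoids redoing this averaging for unchanged pixels.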
Identification of Vibrotactile Patterns Encoding Obstacle Distance Information.
Kim, Yeongmi; Harders, Matthias; Gassert, Roger
2015-01-01
Delivering distance information of nearby obstacles from sensors embedded in a white cane, in addition to the intrinsic mechanical feedback from the cane, can aid the visually impaired in ambulating independently. Haptics is a common modality for conveying such information to cane users, typically in the form of vibrotactile signals. In this context, we investigated the effect of tactile rendering methods, tactile feedback configurations and directions of tactile flow on the identification of obstacle distance. Three tactile rendering methods, with temporal variation only, spatio-temporal variation, and spatial/temporal/intensity variation, were investigated for two vibration feedback configurations. Results showed a significant interaction between tactile rendering method and feedback configuration. Spatio-temporal variation generally resulted in high correct identification rates for both feedback configurations. In the case of the four-finger vibration, tactile rendering with spatial/temporal/intensity variation also resulted in a high distance identification rate. Further, participants expressed their preference for the four-finger vibration over the single-finger vibration in a survey. Both preferred rendering methods, with spatio-temporal variation and with spatial/temporal/intensity variation for the four-finger vibration, could convey obstacle distance information with low workload. Overall, the presented findings provide valuable insights and guidance for the design of haptic displays for electronic travel aids for the visually impaired.
Assessment of a new biomimetic scaffold and its effects on bone formation by OCT
NASA Astrophysics Data System (ADS)
Yang, Ying; Aydin, Halil M.; Piskin, Erhan; El Haj, Alicia J.
2009-02-01
The ultimate target of bone tissue engineering is to generate functional load-bearing bone. By nature, the porous volume in trabecular bone is occupied by osseous medulla. The natural bone matrix consists of hydroxyapatite (HA) crystals precipitated along collagen type I fibres. The mineral phase confers strength while collagen provides flexibility: without the mineral component, bone is very flexible and cannot bear loads, whereas it is brittle when the mineral phase is present without collagen. In this study, we designed and prepared a new type of scaffold which mimics the features of natural bone. The scaffold consists of three different components: a biphasic polymeric base composed of two different biodegradable polymers prepared using a dual porogen approach, and bioactive agents, i.e., collagen and HA particles, which are distributed throughout the matrix only on the pore surfaces. The interaction of these bioactive scaffolds, possessing very high porosity and interconnected pore structures, with cells was investigated over a prolonged culture period using an osteoblastic cell line. The mineral HA particles have a slightly different refractive index from the other elements of a tissue engineering construct, such as the polymeric scaffold and the cell/matrix, and thus exhibit brighter images in OCT. OCT therefore provides a convenient means to assess the morphology and architecture of the blank biomimetic scaffolds. This study also closely examines OCT images of the cultured cell-scaffold constructs in order to assess neo-formed minerals and matrix. The OCT assessments were compared with results from confocal and SEM analysis.
An image-processing software package: UU and Fig for optical metrology applications
NASA Astrophysics Data System (ADS)
Chen, Lujie
2013-06-01
Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], the Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, the package has been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point-cloud-to-surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.
NASA Astrophysics Data System (ADS)
Zheng, Guoyan
2007-03-01
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities used for the visualization of the surgical action. In the so-called CT-based or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or from intraoperatively digitized anatomical landmarks, photorealistic rendering of the surgical action has been identified to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
1991-09-01
single, indivisible entity. This somewhat arbitrary treatment may be rendered more acceptable if one keeps in mind that, to some extent, reoccupation of... R.F. Heizer, pp. 538-549. Handbook of North American Indians, vol. 8. Smithsonian Institution, Washington, D.C. Bedwell, S.F. 1970 Prehistory and
Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee
2012-05-01
Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume renderings of cellular and tissue structures from the oral cavity demonstrate the potential of the system for 3-D fluorescence visualization of the oral cavity in real time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.
Approximating scatterplots of large datasets using distribution splats
NASA Astrophysics Data System (ADS)
Camuto, Matthew; Crawfis, Roger; Becker, Barry G.
2000-02-01
Many situations exist where the plotting of large data sets with categorical attributes is desired in a 3D coordinate system. For example, a marketing company may conduct a survey involving one million subjects and then plot people's favorite car type against their weight, height and annual income. Scatter point plotting, in which each point is individually plotted at its corresponding Cartesian location using a defined primitive, is usually used to render a plot of this type. If the dependent variable is continuous, we can discretize the 3D space into bins or voxels and retain the average value of all records falling within each voxel. Previous work employed volume rendering techniques, in particular splatting, to represent this aggregated data by mapping each average value to a representative color.
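The aggregation step described above, averaging a continuous dependent variable over all records falling in each voxel, can be sketched with NumPy. The bin count and unit data range are assumed parameters for illustration:

```python
import numpy as np

def voxel_averages(points, values, bins=8, lo=0.0, hi=1.0):
    """Aggregate a large 3D scatter plot: per-voxel mean of `values`.

    points: (N, 3) array of coordinates in [lo, hi]
    values: (N,) dependent variable
    Returns (means, counts), each of shape (bins, bins, bins);
    empty voxels hold NaN in `means`.
    """
    idx = np.clip(((points - lo) / (hi - lo) * bins).astype(int), 0, bins - 1)
    flat = np.ravel_multi_index(idx.T, (bins, bins, bins))
    sums = np.bincount(flat, weights=values, minlength=bins**3)
    counts = np.bincount(flat, minlength=bins**3)
    means = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return means.reshape(bins, bins, bins), counts.reshape(bins, bins, bins)
```

Each non-empty voxel then becomes a splat whose color encodes the mean (and whose opacity could encode the count), so a million-record scatter plot collapses to at most `bins**3` rendered primitives.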
High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment
NASA Astrophysics Data System (ADS)
Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.
2006-12-01
The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high resolution, high quality renderings of Earth sciences data and the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Coastal Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya, producing an interactive 3D scene in the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, the Spray glider, the nearshore mooring, the OCSD/USGS mooring and the CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations are aimed at providing researchers with a broader context of sensor locations relative to geologic characteristics, at promoting their use as an educational resource in informal education settings to increase public awareness, and at aiding researchers' proposals and presentations. The visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.
Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation
Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee
2018-01-01
This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact, as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis functions network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self-collisions and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, while the user study revealed the force feedback of the proposed simulator to be realistic. PMID:29342964
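The core of such a radial-basis model is an interpolant fit to (input, force) training pairs: a 6D contact/deformation state in, a 3D force out. A minimal NumPy sketch follows; the Gaussian kernel, its width `eps`, and exact interpolation without regularization are simplifying assumptions, not details of the published network:

```python
import numpy as np

def rbf_fit(X, Y, eps=1.0):
    """Fit a Gaussian radial-basis interpolant from inputs X (N, d_in)
    to force responses Y (N, d_out); returns the weight matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-eps * d2)             # kernel matrix over training inputs
    return np.linalg.solve(Phi, Y)      # one weight row per training sample

def rbf_eval(Xq, X, W, eps=1.0):
    """Evaluate the interpolant at query inputs Xq (M, d_in)."""
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ W
```

At rendering time `rbf_eval` is called once per haptic frame with the current tool state, so the cost is one kernel evaluation against the training set, which is what makes kHz-rate force updates feasible for modestly sized models.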
Interactive Molecular Graphics for Augmented Reality Using HoloLens.
Müller, Christoph; Krone, Michael; Huber, Markus; Biener, Verena; Herr, Dominik; Koch, Steffen; Reina, Guido; Weiskopf, Daniel; Ertl, Thomas
2018-06-13
Immersive technologies like stereo rendering, virtual reality, or augmented reality (AR) are often used in the field of molecular visualisation. Modern, comparably lightweight and affordable AR headsets like Microsoft's HoloLens open up new possibilities for immersive analytics in molecular visualisation. A crucial factor for a comprehensive analysis of molecular data in AR is the rendering speed. HoloLens, however, has limited hardware capabilities due to requirements like battery life, fanless cooling and weight. Consequently, insights from best practices for powerful desktop hardware may not be transferable. Therefore, we evaluate the capabilities of the HoloLens hardware for modern, GPU-enabled, high-quality rendering methods for the space-filling model commonly used in molecular visualisation. We also assess the scalability for large molecular data sets. Based on the results, we discuss ideas and possibilities for immersive molecular analytics. Besides more obvious benefits like the stereoscopic rendering offered by the device, this specifically includes natural user interfaces that use physical navigation instead of the traditional virtual one. Furthermore, we consider different scenarios for such an immersive system, ranging from educational use to collaborative scenarios.
[3D-visualization by MRI for surgical planning of Wilms tumors].
Schenk, J P; Waag, K-L; Graf, N; Wunsch, R; Jourdan, C; Behnisch, W; Tröger, J; Günther, P
2004-10-01
To improve surgical planning of kidney tumors in childhood (Wilms tumor, mesoblastic nephroma) after radiologic verification of the presumptive diagnosis with interactive colored 3D-animation in MRI. In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with raycasting-based 3D-volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5T unit in 4-6 mm slices. Additionally, a phase-contrast MR angiography was applied to delineate the large abdominal and retroperitoneal vessels. A notebook was used to demonstrate the 3D-visualization for surgical planning before surgery and during the surgical procedure. In all 7 cases, the surgical approach was influenced by the interactive 3D-animation and the information was found useful for surgical planning. Above all, the 3D-visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and to the rest of the kidney, as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, the transformation of the anatomical situation from MRI to the surgical situs has become much easier. For surgical planning of Wilms tumors, the 3D-visualization with 3D-animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes surgical preparation. A reduction of complications is to be expected.
Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S.; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.
2016-01-01
Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized, and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in displayed stereoscopic OCT volumes. PMID:27231616
Schulz-Wendtland, Rüdiger; Harz, Markus; Meier-Meitinger, Martina; Brehm, Barbara; Wacker, Till; Hahn, Horst K; Wagner, Florian; Wittenberg, Thomas; Beckmann, Matthias W; Uder, Michael; Fasching, Peter A; Emons, Julius
2017-03-01
Three-dimensional (3D) printing has become widely available, and a few cases of its use in clinical practice have been described. The aim of this study was to explore facilities for the semi-automated delineation of breast cancer tumors and to assess the feasibility of 3D printing of breast cancer tumors. In a case series of five patients, different 3D imaging methods-magnetic resonance imaging (MRI), digital breast tomosynthesis (DBT), and 3D ultrasound-were used to capture 3D data for breast cancer tumors. The volumes of the breast tumors were calculated to assess the comparability of the breast tumor models, and the MRI information was used to render models on a commercially available 3D printer to materialize the tumors. The tumor volumes calculated from the different 3D methods appeared to be comparable. Tumor models with volumes between 325 mm³ and 7,770 mm³ were printed and compared with the models rendered from MRI. The materialization of the tumors reflected the computer models of them. 3D printing (rapid prototyping) appears to be feasible. Scenarios for the clinical use of the technology might include presenting the model to the surgeon to provide a better understanding of the tumor's spatial characteristics in the breast, in order to improve decision-making in relation to neoadjuvant chemotherapy or surgical approaches. J. Surg. Oncol. 2017;115:238-242. © 2016 Wiley Periodicals, Inc.
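The cross-modality volume comparison described above reduces, for any segmented 3D dataset, to counting lesion voxels and scaling by the physical voxel size. A minimal NumPy sketch of that calculation (the helper name and the study's actual software are not given in the abstract; this is an illustrative assumption):

```python
import numpy as np

def tumor_volume_mm3(mask, spacing):
    """Estimate lesion volume from a binary segmentation mask.

    mask    -- 3D boolean array (True inside the tumor)
    spacing -- (dz, dy, dx) voxel edge lengths in mm
    """
    voxel_volume = spacing[0] * spacing[1] * spacing[2]
    return int(np.count_nonzero(mask)) * voxel_volume

# A 10 x 10 x 10 mm cube sampled at 1 mm isotropic spacing.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 5:15, 5:15] = True
print(tumor_volume_mm3(mask, (1.0, 1.0, 1.0)))  # -> 1000.0
```

With anisotropic spacing (common for MRI and DBT), the per-voxel volume differs per modality, which is one reason the abstract's volumes only "appeared to be comparable" rather than identical.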
Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, S.T.C.
The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and that all analysis of multidimensional image data is carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.
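Two staple rendering methods in the volume-visualization literature surveyed here are maximum intensity projection (MIP) and front-to-back compositing along viewing rays. A minimal, package-independent sketch of both (the function names are illustrative, not from the presentation):

```python
import numpy as np

def composite_ray(samples, transfer):
    """Front-to-back alpha compositing along a single viewing ray.

    samples  -- scalar values sampled front to back along the ray
    transfer -- maps a scalar to (color, opacity), each in [0, 1]
    """
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c   # add what this sample contributes
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha

def render_mip(volume, axis=0):
    """Maximum intensity projection: the simplest volume rendering."""
    return volume.max(axis=axis)
```

MIP is common for angiographic data, while compositing with a transfer function produces the familiar semi-transparent renderings of CT and MRI volumes.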
Gockner, T L; Zelzer, S; Mokry, T; Gnutzmann, D; Bellemann, N; Mogler, C; Beierfuß, A; Köllensperger, E; Germann, G; Radeleff, B A; Stampfl, U; Kauczor, H U; Pereira, P L; Sommer, C M
2015-04-01
This study was designed to compare technical parameters during ablation as well as CT 3D rendering and histopathology of the ablation zone between sphere-enhanced microwave ablation (sMWA) and bland microwave ablation (bMWA). In six sheep livers, 18 microwave ablations were performed with identical system presets (power output: 80 W, ablation time: 120 s). In three sheep, transarterial embolisation (TAE) was performed immediately before microwave ablation using spheres (diameter: 40 ± 10 μm) (sMWA). In the other three sheep, microwave ablation was performed without sphere embolisation (bMWA). Contrast-enhanced CT, sacrifice, and liver harvest followed immediately after microwave ablation. Study goals included technical parameters during ablation (resulting power output, ablation time), geometry of the ablation zone applying specific CT 3D rendering with a software prototype (short axis of the ablation zone, volume of the largest aligned ablation sphere within the ablation zone), and histopathology (hematoxylin-eosin, Masson Goldner and TUNEL). Resulting power output/ablation times were 78.7 ± 1.0 W/120 ± 0.0 s for bMWA and 78.4 ± 1.0 W/120 ± 0.0 s for sMWA (n.s., respectively). Short axis/volume were 23.7 ± 3.7 mm/7.0 ± 2.4 cm³ for bMWA and 29.1 ± 3.4 mm/11.5 ± 3.9 cm³ for sMWA (P < 0.01, respectively). Histopathology confirmed the signs of coagulation necrosis as well as early and irreversible cell death for bMWA and sMWA. For sMWA, spheres were detected within, at the rim of, and outside of the ablation zone without conspicuous features. Specific CT 3D rendering identifies a larger ablation zone for sMWA compared with bMWA. The histopathological signs and the detectable amount of cell death are comparable for both groups. When comparing sMWA with bMWA, TAE has no effect on the technical parameters during ablation.
Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution
NASA Technical Reports Server (NTRS)
Interrante, Victoria; Grosch, Chester
1997-01-01
This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.
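Line integral convolution smears an input noise texture along streamlines of the vector field, so that correlation between neighboring pixels reveals the flow direction. A simplified 2D sketch of the idea (the paper works with 3D volumes and adds halos; this reduced version and its function name are illustrative only):

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10):
    """Line integral convolution on a 2D field (Euler streamline tracing).

    vx, vy -- velocity components, same shape as the input noise texture
    noise  -- white-noise input texture to be smeared along streamlines
    length -- number of integration steps traced in each direction
    """
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):          # trace both directions
                y, x = float(i), float(j)
                for _ in range(length):
                    yi, xi = int(y), int(x)
                    if not (0 <= yi < h and 0 <= xi < w):
                        break
                    total += noise[yi, xi]     # box filter along streamline
                    count += 1
                    norm = np.hypot(vx[yi, xi], vy[yi, xi]) or 1.0
                    x += sign * vx[yi, xi] / norm
                    y += sign * vy[yi, xi] / norm
            out[i, j] = total / max(count, 1)
    return out
```

The paper's input-texture design matters precisely because this convolution averages whatever texture it is given; a poorly chosen input yields low-contrast output.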
Viewpoints: Interactive Exploration of Large Multivariate Earth and Space Science Data Sets
NASA Astrophysics Data System (ADS)
Levit, C.; Gazis, P. R.
2006-05-01
Analysis and visualization of extremely large and complex data sets may be one of the most significant challenges facing earth and space science investigators in the forthcoming decades. While advances in hardware speed and storage technology have roughly kept up with (indeed, have driven) increases in database size, the same is not true of our abilities to manage the complexity of these data. Current missions, instruments, and simulations produce so much data of such high dimensionality that they outstrip the capabilities of traditional visualization and analysis software. This problem can only be expected to get worse as data volumes increase by orders of magnitude in future missions and in ever-larger supercomputer simulations. For large multivariate data (more than 10^5 samples or records with more than 5 variables per sample) the interactive graphics response of most existing statistical analysis, machine learning, exploratory data analysis, and/or visualization tools such as Torch, MLC++, Matlab, S++/R, and IDL stutters, stalls, or stops working altogether. Fortunately, the graphics processing units (GPUs) built in to all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform application which leverages much of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application is the interactive analysis of large, complex, multivariate data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.
ERIC Educational Resources Information Center
da Silva, André Constantino; Freire, Fernanda Maria Pereira; de Arruda, Alan Victor Pereira; da Rocha, Heloísa Vieira
2013-01-01
e-Learning environments offer content, such as text, audio, video, and animations, using the Web infrastructure, and they are designed for users interacting with a keyboard, mouse, and medium-sized screen. Mobile devices, such as smartphones and tablets, have enough computational power to render Web pages, allowing users to browse the Internet and access e-Learning…
Tile-based Level of Detail for the Parallel Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niski, K; Cohen, J D
Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach based on hierarchical, screen-space tiles to parallelizing rendering with level of detail. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing while our level of detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs.
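The adaptive tile sizing described above can be pictured as a recursive split: a screen-space tile whose estimated cost exceeds a per-processor budget is bisected along its longer axis until every piece fits. This is an illustrative reconstruction in Python, not the authors' implementation (their system assigns adapt, render, and machine tiles to CPUs, GPUs, and PCs respectively):

```python
def split_tiles(tile, cost, budget):
    """Recursively split a screen-space tile until each piece fits the budget.

    tile   -- (x, y, w, h) rectangle in pixels
    cost   -- function estimating the render cost of a tile
    budget -- maximum cost a single processor should take on
    """
    x, y, w, h = tile
    if cost(tile) <= budget or (w <= 1 and h <= 1):
        return [tile]
    if w >= h:                       # split along the longer axis
        half = w // 2
        children = [(x, y, half, h), (x + half, y, w - half, h)]
    else:
        half = h // 2
        children = [(x, y, w, half), (x, y + half, w, h - half)]
    return [t for c in children for t in split_tiles(c, cost, budget)]
```

With a cost estimator driven by per-tile geometry or level-of-detail load rather than plain area, the same recursion yields the load balancing the abstract describes.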
A 3-RSR Haptic Wearable Device for Rendering Fingertip Contact Forces.
Leonardis, Daniele; Solazzi, Massimiliano; Bortone, Ilaria; Frisoli, Antonio
2017-01-01
A novel wearable haptic device for modulating contact forces at the fingertip is presented. Rendering of forces by skin deformation in three degrees of freedom (DoF), with contact-no contact capabilities, was implemented through rigid parallel kinematics. The novel asymmetrical three revolute-spherical-revolute (3-RSR) configuration allowed compact dimensions with minimum encumbrance of the hand workspace. The device was designed to render constant to low frequency deformation of the fingerpad in three DoF, combining light weight with relatively high output forces. A differential method for solving the non-trivial inverse kinematics is proposed and implemented in real time for controlling the device. The first experimental activity evaluated discrimination of different fingerpad stretch directions in a group of five subjects. The second experiment, enrolling 19 subjects, evaluated cutaneous feedback provided in a virtual pick-and-place manipulation task. Stiffness of the fingerpad plus device was measured and used to calibrate the physics of the virtual environment. The third experiment with 10 subjects evaluated interaction forces in a virtual lift-and-hold task. Although with different performance in the two manipulation experiments, overall results show that participants better controlled interaction forces when the cutaneous feedback was active, with significant differences between the visual and visuo-haptic experimental conditions.
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual Reality systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by a camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, moving left/right and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in its support for walking forward/backward.
Openwebglobe 2: Visualization of Complex 3D-GEODATA in the (mobile) Webbrowser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D-Geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive framerates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D-Geodata on nearly every device.
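Cached map tiles of the kind served here are commonly addressed with the standard web-map ("slippy map") scheme, which maps a longitude/latitude pair to a tile index at a given zoom level via the Web-Mercator projection. This is a generic sketch of that widespread convention; OpenWebGlobe's actual addressing scheme may differ:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Web-Mercator (slippy-map) tile index for a coordinate at a zoom level."""
    n = 2 ** zoom                                   # n x n tiles at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(0.0, 0.0, 1))  # -> (1, 1)
```

Doubling the tile count per axis at each zoom level is what makes such pyramids suitable for out-of-core streaming: a client only ever requests the handful of tiles visible at its current view.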
Uninjured trees - a meaningful guide to white-pine weevil control decisions
William E. Waters
1962-01-01
The white-pine weevil, Pissodes strobi, is a particularly insidious forest pest that can render a stand of host trees virtually worthless. It rarely, if ever, kills a tree; but the crooks, forks, and internal defects that develop in attacked trees over a period of years may reduce the merchantable volume and value of the tree at harvest age to zero. Dollar losses are...
ERIC Educational Resources Information Center
Whitcomb, Emeline S.
1931-01-01
This chapter of the "Biennial Survey of Education in the United States, 1928-1930" focuses on the following topic areas as they relate to homemaking education: Part I: Present trends, contains the following: (a) Contributions rendered; (b) Philosophy changes; (c) Expansion of home economics in our public schools; (d) Home economics required; (e)…
Basic Course Deskbook, Volume 2: General Administrative Law
2002-03-01
jurisdictions can result in a void marriage. 5. Impotence: usually must render the party physically incapable of normal sexual relations and must...ground for annulment in itself, but may constitute fraud if the party never intended to have sexual relations. IX. UNIFORMED SERVICES FORMER SPOUSES...must submit a sworn statement articulating reasonable facts supporting the existence or nonexistence of requisite sexual contact before genetic
1984-10-11
Unfamiliar names rendered phonetically or transliterated are enclosed in parentheses. Words or names preceded by a question mark and enclosed in...48 Batasan Examines Volume, Inconsistency of Marcos Decrees (Mariano M. Florido; VISAYAN HERALD, 10 Sep 84) 52 KBL Leaders Consider ’Political...QUAN DOI NHAN DAN, Jul 84) Artillery Mobility Requirements Outlined (Nguyen Dinh Thach; TAP CHI QUAN DOI NHAN DAN, Jul 84) 101 PARTY
NASA Astrophysics Data System (ADS)
Idicheria, Cherian Alex
An experimental study was performed with the aim of investigating the structure of transitional and turbulent nonpremixed jet flames under different gravity conditions. In particular, the focus was to determine the effect of buoyancy on the mean and fluctuating characteristics of the jet flames. Experiments were conducted under three gravity levels, viz. 1 g, 20 mg and 100 μg. The milligravity and microgravity conditions were achieved by dropping a jet-flame rig in the UT-Austin 1.25-second and the NASA-Glenn Research Center 2.2-second drop towers, respectively. The principal diagnostics employed were time-resolved, cinematographic imaging of the visible soot luminosity and planar laser Mie scattering (PLMS). For the cinematographic flame luminosity imaging experiments, the flames studied were piloted nonpremixed propane, ethylene and methane jet flames at source Reynolds numbers ranging from 2000 to 10500. From the soot luminosity images, mean and root-mean-square (RMS) images were computed, and volume rendering of the image sequences was used to investigate the large-scale structure evolution and flame tip dynamics. The relative importance of buoyancy was quantified with the parameter x_L, as defined by Becker and Yamazaki [1978]. The results show, in contrast to previous microgravity studies, that the high Reynolds number flames have the same flame length irrespective of the gravity level. The RMS fluctuations and volume renderings indicate that the large-scale structure and flame tip dynamics are essentially identical to those of purely momentum-driven flames provided x_L is approximately less than 2. The volume renderings show that the luminous structure celerities (normalized by jet exit velocity) are approximately constant for x_L < 6, but are substantially larger for x_L > 8. The celerity values for x_L > 8 are seen to follow an x_L^(3/2) scaling, which can be predicted with a simplified momentum equation analysis for the buoyancy-dominated regime.
The underlying turbulent structure and mean mixture fraction characteristics were investigated in nonreacting and reacting jets with a PLMS diagnostic system developed for the UT-Austin 1.25-second drop tower. (Abstract shortened by UMI.)
Choi, Dong-hak; Hiro-Oka, Hideaki; Shimizu, Kimiya; Ohbayashi, Kohji
2012-01-01
An ultrafast frequency domain optical coherence tomography system was developed at A-scan rates between 2.5 and 10 MHz, a B-scan rate of 4 or 8 kHz, and volume-rates between 12 and 41 volumes/second. In the case of the worst duty ratio of 10%, the averaged A-scan rate was 1 MHz. Two optical demultiplexers at a center wavelength of 1310 nm were used for linear-k spectral dispersion and simultaneous differential signal detection at 320 wavelengths. The depth-range, sensitivity, sensitivity roll-off by 6 dB, and axial resolution were 4 mm, 97 dB, 6 mm, and 23 μm, respectively. Using FPGAs for FFT and a GPU for volume rendering, a real-time 4D display was demonstrated at a rate up to 41 volumes/second for an image size of 256 (axial) × 128 × 128 (lateral) voxels. PMID:23243560
Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten
2017-11-01
Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
Smits, Samuel A; Ouverney, Cleber C
2010-08-18
Many software packages have been developed to address the need for generating phylogenetic trees intended for print. With an increased use of the web to disseminate scientific literature, there is a need for phylogenetic trees to be viewable across many types of devices and feature some of the interactive elements that are integral to the browsing experience. We propose a novel approach for publishing interactive phylogenetic trees. We present a javascript library, jsPhyloSVG, which facilitates constructing interactive phylogenetic trees from raw Newick or phyloXML formats directly within the browser in Scalable Vector Graphics (SVG) format. It is designed to work across all major browsers and renders an alternative format for those browsers that do not support SVG. The library provides tools for building rectangular and circular phylograms with integrated charting. Interactive features may be integrated and made to respond to events such as clicks on any element of the tree, including labels. jsPhyloSVG is an open-source solution for rendering dynamic phylogenetic trees. It is capable of generating complex and interactive phylogenetic trees across all major browsers without the need for plugins. It is novel in supporting the ability to interpret the tree inference formats directly, exposing the underlying markup to data-mining services. The library source code, extensive documentation and live examples are freely accessible at www.jsphylosvg.com.
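The raw Newick format that jsPhyloSVG interprets is a parenthesized tree encoding, which a short recursive-descent parser can handle. An illustrative parser (in Python rather than the library's JavaScript; branch lengths, if present, are simply left attached to the node name):

```python
def parse_newick(s):
    """Parse a Newick string into nested (name, children) tuples."""
    pos = 0

    def node():
        nonlocal pos
        children = []
        if s[pos] == '(':
            pos += 1                      # consume '('
            children.append(node())
            while s[pos] == ',':          # siblings separated by commas
                pos += 1
                children.append(node())
            pos += 1                      # consume ')'
        start = pos                       # read the (possibly empty) label
        while pos < len(s) and s[pos] not in '(),;':
            pos += 1
        return (s[start:pos], children)

    return node()

print(parse_newick("((A,B),C);"))
# -> ('', [('', [('A', []), ('B', [])]), ('C', [])])
```

Interpreting the format directly in the browser, as the library does, is what exposes the underlying tree structure to data-mining services rather than shipping a flat image.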
NASA Astrophysics Data System (ADS)
Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei
2017-07-01
This paper presents a novel integrated marine visualization framework which focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed for extracting useful patterns, recognizing changes, and understanding physical processes in oceanographic research. However, the multi-source, multi-format, multi-dimensional characteristics of marine data pose a challenge for interactive and timely marine data analysis and visualization in one workflow. A global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a real geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized, and the video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented, and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized. The effectiveness and efficiency of the framework is demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.
NASA Astrophysics Data System (ADS)
Juhnke, Bethany; Berron, Monica; Philip, Adriana; Williams, Jordan; Holub, Joseph; Winer, Eliot
2013-03-01
Advancements in medical image visualization in recent years have enabled three-dimensional (3D) medical images to be volume-rendered from magnetic resonance imaging (MRI) and computed tomography (CT) scans. Medical data is crucial for patient diagnosis and medical education, and analyzing these three-dimensional models rather than two-dimensional (2D) slices would enable more efficient analysis by surgeons and physicians, especially non-radiologists. An interaction device that is intuitive, robust, and easily learned is necessary to integrate 3D modeling software into the medical community. The keyboard and mouse configuration does not readily manipulate 3D models because these traditional interface devices function within two degrees of freedom, not the six degrees of freedom presented in three dimensions. Using a familiar, commercial-off-the-shelf (COTS) device for interaction would minimize training time and enable maximum usability with 3D medical images. Multiple techniques are available to manipulate 3D medical images and provide doctors more innovative ways of visualizing patient data. One such example is windowing. Windowing is used to adjust the viewed tissue density of digital medical data. A software platform available at the Virtual Reality Applications Center (VRAC), named Isis, was used to visualize and interact with the 3D representations of medical data. In this paper, we present the methodology and results of a user study that examined the usability of windowing 3D medical imaging using a Kinect™ device compared to a traditional mouse.
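Windowing, as described above, maps a chosen range of raw scanner values (the window width, centered on the window level) onto the displayable gray range; values outside the window clamp to black or white. A minimal sketch of this standard mapping (function name is illustrative; the Isis platform's implementation is not described in the paper):

```python
import numpy as np

def apply_window(values, center, width):
    """Map raw intensities to 8-bit gray levels via window center/width.

    values -- array of scanner values (e.g. Hounsfield units for CT)
    center -- window level: the value mapped to mid-gray
    width  -- window width: the range of values spread over 0..255
    """
    lo = center - width / 2.0
    out = (np.asarray(values, dtype=float) - lo) / width
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

# A typical soft-tissue window (center 40, width 400): everything below
# -160 clamps to black, everything above 240 clamps to white.
print(apply_window([-500, 40, 500], 40, 400))  # -> [  0 127 255]
```

Interactively adjusting `center` and `width` is exactly the gesture-driven operation the user study evaluates with the Kinect versus the mouse.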
Immersive Molecular Visualization with Omnidirectional Stereoscopic Ray Tracing and Remote Rendering
Stone, John E.; Sherman, William R.; Schulten, Klaus
2016-01-01
Immersive molecular visualization provides the viewer with intuitive perception of complex structures and spatial relationships that are of critical interest to structural biologists. The recent availability of commodity head mounted displays (HMDs) provides a compelling opportunity for widespread adoption of immersive visualization by molecular scientists, but HMDs pose additional challenges due to the need for low-latency, high-frame-rate rendering. State-of-the-art molecular dynamics simulations produce terabytes of data that can be impractical to transfer from remote supercomputers, necessitating routine use of remote visualization. Hardware-accelerated video encoding has profoundly increased frame rates and image resolution for remote visualization, however round-trip network latencies would cause simulator sickness when using HMDs. We present a novel two-phase rendering approach that overcomes network latencies with the combination of omnidirectional stereoscopic progressive ray tracing and high performance rasterization, and its implementation within VMD, a widely used molecular visualization and analysis tool. The new rendering approach enables immersive molecular visualization with rendering techniques such as shadows, ambient occlusion lighting, depth-of-field, and high quality transparency, that are particularly helpful for the study of large biomolecular complexes. We describe ray tracing algorithms that are used to optimize interactivity and quality, and we report key performance metrics of the system. The new techniques can also benefit many other application domains. PMID:27747138
A heterogeneous computing environment for simulating astrophysical fluid flows
NASA Technical Reports Server (NTRS)
Cazes, J.
1994-01-01
In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.
Scarfone, Christopher; Lavely, William C; Cmelak, Anthony J; Delbeke, Dominique; Martin, William H; Billheimer, Dean; Hallahan, Dennis E
2004-04-01
The aim of this investigation was to evaluate the influence and accuracy of (18)F-FDG PET in target volume definition as a complementary modality to CT for patients with head and neck cancer (HNC) using dedicated PET and CT scanners. Six HNC patients were custom fitted with head and neck and upper body immobilization devices, and conventional radiotherapy CT simulation was performed together with (18)F-FDG PET imaging. Gross target volume (GTV) and pathologic nodal volumes were first defined in the conventional manner based on CT. A segmentation and surface-rendering registration technique was then used to coregister the (18)F-FDG PET and CT planning image datasets. (18)F-FDG PET GTVs were determined and displayed simultaneously with the CT contours. CT GTVs were then modified based on the PET data to form final PET/CT treatment volumes. Five-field intensity-modulated radiation therapy (IMRT) was then used to demonstrate dose targeting to the CT GTV or the PET/CT GTV. One patient was PET-negative after induction chemotherapy. The CT GTV was modified in all remaining patients based on (18)F-FDG PET data. The resulting PET/CT GTV was larger than the original CT volume by an average of 15%. In 5 cases, (18)F-FDG PET identified active lymph nodes that corresponded to lymph nodes contoured on CT. The pathologically enlarged CT lymph nodes were modified to create final lymph node volumes in 3 of 5 cases. In 1 of 6 patients, (18)F-FDG-avid lymph nodes were not identified as pathologic on CT. In 2 of 6 patients, registration of the independently acquired PET and CT data using segmentation and surface rendering resulted in a suboptimal alignment and, therefore, had to be repeated. Radiotherapy planning using IMRT demonstrated the capability of this technique to target anatomic or anatomic/physiologic target volumes. In this manner, metabolically active sites can be intensified to greater daily doses. 
Inclusion of (18)F-FDG PET data resulted in modified target volumes in radiotherapy planning for HNC. PET and CT data acquired on separate, dedicated scanners may be coregistered for therapy planning; however, dual-acquisition PET/CT systems may be considered to reduce the need for reregistrations. It is possible to use IMRT to target dose to metabolically active sites based on coregistered PET/CT data.
Almonte, Lisa; Colchero, Jaime
2017-02-23
The present work analyses how the tip-sample interaction signals critically determine the operation of an Atomic Force Microscope (AFM) set-up immersed in liquid. On heterogeneous samples, the conservative tip-sample interaction may vary significantly from point to point - in particular from attractive to repulsive - rendering correct feedback very challenging. Lipid membranes prepared on a mica substrate are analyzed as reference samples which are locally heterogeneous (material contrast). The AFM set-up is operated dynamically at low oscillation amplitude and all available experimental data signals - the normal force, as well as the amplitude and frequency - are recorded simultaneously. From the analysis of how the dissipation (oscillation amplitude) and the conservative interaction (normal force and resonance frequency) vary with the tip-sample distance we conclude that dissipation is the only appropriate feedback source for stable and correct topographic imaging. The normal force and phase then carry information about the sample composition ("chemical contrast"). Dynamic AFM allows imaging in a non-contact regime where essentially no forces are applied, rendering dynamic AFM a truly non-invasive technique.
Sequence alignment visualization in HTML5 without Java.
Gille, Christoph; Weyand, Birgit; Gille, Andreas
2014-01-01
Java has been used extensively for the visualization of biological data on the web. However, the Java runtime environment is an additional layer of software with its own set of technical problems and security risks. HTML in its new version 5 provides features that, for some tasks, may render Java unnecessary. Alignment-To-HTML is the first HTML-based interactive visualization for annotated multiple sequence alignments. The server-side script interpreter can perform all tasks, such as (i) sequence retrieval, (ii) alignment computation, (iii) rendering, (iv) identification of homologous structural models and (v) communication with BioDAS servers. The rendered alignment can be included in web pages and is displayed in all browsers on all platforms, including touch-screen tablets. The functionality of the user interface is similar to legacy Java applets and includes color schemes, highlighting of conserved and variable alignment positions, row reordering by drag and drop, interlinked 3D visualization and sequence groups. Novel features are (i) support for multiple overlapping residue annotations, such as chemical modifications, single nucleotide polymorphisms and mutations, (ii) mechanisms to quickly hide residue annotations, (iii) export to MS Word and (iv) sequence icons. Alignment-To-HTML, the first interactive alignment visualization that runs in web browsers without additional software, confirms that to some extent HTML5 is already sufficient to display complex biological data. The low speed at which programs are executed in browsers is still the main obstacle. Nevertheless, we envision an increased use of HTML and JavaScript for interactive biological software. Available under the GPL at: http://www.bioinformatics.org/strap/toHTML/.
An augmented reality tool for learning spatial anatomy on mobile devices.
Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti
2017-09-01
Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in this study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool, focused on measuring the system's ability to accurately render digital content in the real world. We imported virtual surface models derived from Computed Tomography (CT) data into a 3D Unity engine environment and implemented an AR algorithm to display them on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.
Modelling compressible dense and dilute two-phase flows
NASA Astrophysics Data System (ADS)
Saurel, Richard; Chinnayya, Ashwin; Carmouze, Quentin
2017-06-01
Many two-phase flow situations, from engineering science to astrophysics, deal with the transition from dense (high concentration of the condensed phase) to dilute (low concentration of the same phase) regimes, covering the entire range of volume fractions. Some models are now well accepted at the two limits, but none can accurately cover the entire range, in particular regarding wave propagation. In the present work, an alternative to the Baer and Nunziato (BN) model [Baer, M. R. and Nunziato, J. W., "A two-phase mixture theory for the deflagration-to-detonation transition (DDT) in reactive granular materials," Int. J. Multiphase Flow 12(6), 861 (1986)], initially designed for dense flows, is built. The corresponding model is hyperbolic and thermodynamically consistent. Contrary to the BN model, which involves 6 wave speeds, the new formulation involves only 4 waves, in agreement with the Marble model [Marble, F. E., "Dynamics of a gas containing small solid particles," Combustion and Propulsion (5th AGARD Colloquium) (Pergamon Press, 1963), Vol. 175] based on pressureless Euler equations for the dispersed phase, a well-accepted model for low particle volume concentrations. In the new model, the presence of pressure in the momentum equation of the particles and the consideration of volume fractions in the two phases render the model valid for large particle concentrations. A symmetric version of the new model is derived as well for liquids containing gas bubbles. This model version also involves 4 characteristic wave speeds, but with different velocities. Finally, the two sub-models with 4 waves are combined in a unique formulation, valid for the full range of volume fractions. It involves the same 6 wave speeds as the BN model, but at a given point of space only 4 waves emerge, depending on the local volume fractions. The non-linear pressure waves propagate only in the phase with dominant volume fraction.
The new model is tested numerically on various test problems, ranging from separated phases in a shock tube to shock-particle cloud interaction. Its predictions are compared to the BN and Marble models as well as against experimental data, showing clear improvements.
Experimenter's Laboratory for Visualized Interactive Science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Rodier, Daniel R.; Klemp, Marjorie K.
1994-01-01
ELVIS (Experimenter's Laboratory for Visualized Interactive Science) is an interactive visualization environment that enables scientists, students, and educators to visualize and analyze large, complex, and diverse sets of scientific data. It accomplishes this by presenting the data sets as 2-D, 3-D, color, stereo, and graphic images with movable and multiple light sources combined with displays of solid-surface, contours, wire-frame, and transparency. By simultaneously rendering diverse data sets acquired from multiple sources, formats, and resolutions and by interacting with the data through an intuitive, direct-manipulation interface, ELVIS provides an interactive and responsive environment for exploratory data analysis.
NASA Astrophysics Data System (ADS)
Karasso, P. S.; Mungal, M. G.
1991-05-01
This study investigates the structure and mixing of the two-dimensional turbulent mixing layer when subjected to longitudinal streamwise curvature. The straight layer is now well known to be dominated by the primary Kelvin-Helmholtz (KH) instability as well as the secondary Taylor-Goertler (TG) instability. For equal density fluids, placing the high-speed fluid on the inside of a streamwise bend causes the TG instability to be enhanced (unstable case), while placing the low-speed fluid on the inside of the same bend leads to the suppression of the TG instability (stable case). The location of the mixing transition is correspondingly altered. Our goal is to study the changes to the mixing field and growth rate resulting from the competition between instabilities. Our studies are performed in a newly constructed blow-down water facility capable of high Reynolds numbers and excellent optical access. Maximum flow speeds are 2 and 0.25 m/sec for the high- and low-speed sides, respectively, leading to maximum Reynolds numbers of 80 000 based on velocity difference and the width of the layer. We are able to dye one stream with a fluorescent dye, thus providing several planar views of the flow under laser sheet illumination. These views are superior to conventional approaches as they are free of wall effects and are not spatially integrating. However, our most useful diagnostic of the structure of the flow is the ability to record high-speed images of the end view of the flow that are then reconstructed by computer using the volume rendering technique of Jiménez et al. [1] This approach is especially useful as it allows us to compare the structural changes to the flow resulting from the competition between the KH and TG instabilities. Another advantage is the fact that several hundred frames, covering many characteristic times, are incorporated into the rendered image and thus capture considerably more flow physics than do still images.
We currently have our rendering techniques fully operational [2], and are presently acquiring high quality high-speed movies of the various flow cases. Our findings to date, based on planar time-averaged and instantaneous views, show the following: (1) a 50% increase in growth rate from the stable to the unstable case resulting from mild curvature; (2) an enhancement of the TG vortices in the unstable case, but without major disruption of the KH instability which remains relatively intact; and (3) the occurrence of the KH instability at angles tilted with respect to the splitter plate tip, in agreement with the predictions of linear stability theory. This final observation has not been reported to date, primarily because sheet techniques have not been used at Reynolds numbers as high as the present study. The presentation will provide detailed views of the changes between the stable, straight, and unstable cases using our volume rendering approach, and will provide statistical measures, such as changes to vortex spacing and size, to quantify such changes.
An Interactive Virtual 3D Tool for Scientific Exploration of Planetary Surfaces
NASA Astrophysics Data System (ADS)
Traxler, Christoph; Hesina, Gerd; Gupta, Sanjeev; Paar, Gerhard
2014-05-01
In this paper we present an interactive 3D visualization tool for the scientific analysis and planning of planetary missions. At the moment scientists have to look at individual camera images separately. There is no tool to combine them in three dimensions and look at them seamlessly as a geologist would do (by walking backwards and forwards, resulting in different scales). For this reason a virtual 3D reconstruction of the terrain that can be interactively explored is necessary. Such a reconstruction has to consider multiple scales, ranging from orbital image data to close-up surface image data from rover cameras. The 3D viewer allows seamless zooming between these various scales, giving scientists the possibility to relate small surface features (e.g. rock outcrops) to larger geological contexts. For a reliable geologic assessment a realistic surface rendering is important. Therefore the material properties of the rock surfaces will be considered for real-time rendering. This is achieved by an appropriate Bidirectional Reflectance Distribution Function (BRDF) estimated from the image data. The BRDF is implemented to run on the Graphics Processing Unit (GPU) to enable realistic real-time rendering, which allows a naturalistic perception for scientific analysis. Another important aspect for realism is the consideration of natural lighting conditions, which means skylight to illuminate the reconstructed scene. In our case we provide skylights from Mars and Earth, which allows switching between these two modes of illumination. This gives geologists the opportunity to perceive rock outcrops from Mars as they would appear on Earth, facilitating scientific assessment. Besides viewing the virtual reconstruction on multiple scales, scientists can also perform various measurements, e.g. the geo-coordinates of a selected point or the distance between two surface points. Rover or other models can be placed into the scene and snapped onto certain locations of the terrain.
These are important features to support the planning of rover paths. In addition annotations can be placed directly into the 3D scene, which also serve as landmarks to aid navigation. The presented visualization and planning tool is a valuable asset for scientific analysis of planetary mission data. It complements traditional methods by giving access to an interactive virtual 3D reconstruction, which is realistically rendered. Representative examples and further information about the interactive 3D visualization tool can be found on the FP7-SPACE Project PRoViDE web page http://www.provide-space.eu/interactive-virtual-3d-tool/. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 'PRoViDE'.
Improving aircraft conceptual design - A PHIGS interactive graphics interface for ACSYNT
NASA Technical Reports Server (NTRS)
Wampler, S. G.; Myklebust, A.; Jayaram, S.; Gelhausen, P.
1988-01-01
A CAD interface has been created for the 'ACSYNT' aircraft conceptual design code that permits the execution and control of the design process via interactive graphics menus. This CAD interface was coded entirely with the new three-dimensional graphics standard, the Programmer's Hierarchical Interactive Graphics System (PHIGS). The CAD/ACSYNT system is designed for use by state-of-the-art high-speed imaging workstations. Attention is given to the approaches employed in modeling, data storage, and rendering.
China’s Pursuit of Africa’s Natural Resources, (CSL Issue Paper, Volume 1-09, June 2009)
2009-06-01
beyond recovery, and mineral exploitation has generated significant pollution that has rendered agricultural land infertile and given rise to social...heard. Then-South African President Thabo Mbeki cautioned China against dumping its low-cost textile and plastics products in Africa, thus denying...energy-security-asem-beijing-financial-crisis (accessed December 5, 2008). African Politics Portal. 2008. Top Ten Misconceptions about Chinese
Journal of Special Operations Medicine. Volume 8, Edition 4, Fall 2008
2008-01-01
preempt, or respond to terrorism. Weapons of mass destruction (WMDs) counterproliferation missions are taken to locate, seize, destroy, render...computing consumable supply quantities, all line items are rounded to the nearest quarter package. This not only provides logistics units an easier...substantial sleep pressure (fatigue) is a losing proposition. Second, detractors often like to draw comparisons between civil-aviation operations, which do
Real-Time High-Dynamic Range Texture Mapping
2001-01-01
the renderings produced by radiosity and global illumination algorithms. As a particular example, Greg Ward’s RADIANCE synthetic imaging system [32...in software only. [26] presented a technique for performing Ward’s tone reproduction algorithm interactively to visualize radiosity solutions
Snow rendering for interactive snowplow simulation: supporting safety in snowplow design.
DOT National Transportation Integrated Search
2011-02-01
During a snowfall, following a snowplow can be extremely dangerous. This danger comes from the human visual system's inability to accurately perceive the speed and motion of the snowplow, often resulting in rear-end collisions. For this project...
Relighting Character Motion for Photoreal Simulations
2006-11-01
Southern California Cinema-Television Interactive Media Division, LA, CA 90089 ABSTRACT. We present a fully image-based approach for...Graphics Proceedings, Annual Conference Series, 279–288. DEBEVEC, P. E., TAYLOR, C. J., AND MALIK, J. 1996. Modeling and rendering architecture from
On soft clipping of Zernike moments for deblurring and enhancement of optical point spread functions
NASA Astrophysics Data System (ADS)
Becherer, Nico; Jödicke, Hanna; Schlosser, Gregor; Hesser, Jürgen; Zeilfelder, Frank; Männer, Reinhard
2006-02-01
Blur and noise originating from the physical imaging process degrade microscope data. Accurate deblurring techniques require, however, an accurate estimate of the underlying point-spread function (PSF). PSFs can be represented well by Zernike polynomials, since they offer a compact representation in which low-order coefficients capture typical aberrations of optical wavefronts while noise is concentrated in higher-order coefficients. A quantitative description of the (Gaussian) noise distribution over the Zernike moments of various orders is given, which is the basis for the new soft clipping approach for denoising PSFs. Instead of discarding moments beyond a certain order, those Zernike moments that are more sensitive to noise are dampened according to the measured distribution and the present noise model. Further, a new scheme to combine experimental and theoretical PSFs in Zernike space is presented. According to our experimental reconstructions, using the new improved PSF raises the correlation between the reconstructed and original volume by 15% in average cases and by up to 85% for thin fibre structures, compared to reconstructions where a non-improved PSF was used. Finally, we demonstrate the advantages of our approach on 3D images from confocal microscopes by generating visually improved volumes. Additionally, we present a method to render the reconstructed results with a new, almost artifact-free volume rendering technique based on a Shear-Warp approach, wavelet data encoding, and a recent method for approximating the gray-value distribution with a super-spline model.
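The soft clipping idea above, attenuating noise-sensitive high-order Zernike moments instead of truncating them outright, can be sketched as follows. The logistic weighting law, the cutoff order, and the softness parameter are illustrative assumptions; the paper derives its damping from a measured Gaussian noise distribution over the moments.

```python
import numpy as np

def soft_clip_moments(coeffs, orders, cutoff_order=6, softness=2.0):
    """Dampen Zernike moments by radial order with a smooth sigmoid weight
    instead of a hard truncation: low orders (typical optical aberrations)
    pass nearly unchanged, while noise-dominated high orders are attenuated.
    The logistic law here is illustrative, not the paper's fitted model."""
    orders = np.asarray(orders, dtype=float)
    weights = 1.0 / (1.0 + np.exp((orders - cutoff_order) / softness))
    return np.asarray(coeffs) * weights, weights

# Example: ten unit moments with radial orders 0..9
coeffs = np.ones(10)
clipped, w = soft_clip_moments(coeffs, np.arange(10))
```

In contrast to a hard cutoff, the weight curve never discards a moment entirely, so genuine high-order wavefront structure is reduced rather than lost.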
LONI visualization environment.
Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W
2006-06-01
Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and for other clinical, pedagogical, and research endeavors.
High-power graphic computers for visual simulation: a real-time--rendering revolution
NASA Technical Reports Server (NTRS)
Kaiser, M. K.
1996-01-01
Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.
Scalable Multi-Platform Distribution of Spatial 3d Contents
NASA Astrophysics Data System (ADS)
Klimke, J.; Hagedorn, B.; Döllner, J.
2013-09-01
Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which severely limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model with a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
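The tile-based provisioning concept, where clients request pre-rendered image tiles instead of raw 3D geometry, can be sketched with a quadtree tile index. The normalized world coordinates and the 2^zoom-tiles-per-axis scheme below are assumptions for illustration; the abstract does not specify the service's actual tiling layout.

```python
def tile_index(x, y, zoom, world_size=1.0):
    """Map a normalized world coordinate (0..1) to the integer index of the
    pre-rendered image tile covering it at a given zoom level. A quadtree
    scheme with 2**zoom tiles per axis is assumed."""
    n = 2 ** zoom
    tx = min(int(x / world_size * n), n - 1)
    ty = min(int(y / world_size * n), n - 1)
    return tx, ty

def tiles_for_viewport(x0, y0, x1, y1, zoom):
    """Enumerate all tile indices a thin client must request from the
    rendering service to cover its current viewport."""
    tx0, ty0 = tile_index(x0, y0, zoom)
    tx1, ty1 = tile_index(x1, y1, zoom)
    return [(tx, ty) for tx in range(tx0, tx1 + 1)
                     for ty in range(ty0, ty1 + 1)]
```

Because the client only ever maps coordinates to tile addresses, transfer cost depends on viewport size and zoom, not on the complexity of the underlying city model.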
STRING 3: An Advanced Groundwater Flow Visualization Tool
NASA Astrophysics Data System (ADS)
Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph
2016-04-01
The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] focused solely on the intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for the visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and the challenges involved, in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING, moving pathlets provide an intuition of the velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow, an advanced method for intelligent, time-dependent seeding is used, building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D brings many new challenges. With the implementation of a seeding strategy for 3D, one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field, other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining raytraced volume rendering with regular OpenGL geometry. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself.
For this, the silhouette based on the angle of neighboring faces is extracted. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool, the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked. These provide a trade-off between the usage of geometry and fragment shaders. We show that point sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point-sprite-based approach poses many non-trivial problems in joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179. [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670. [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D visualization of groundwater flow. In Proceedings of IAMG 2015, Freiberg, pp. 813-822.
NASA Astrophysics Data System (ADS)
Gee, L.; Reed, B.; Mayer, L.
2002-12-01
Recent years have seen remarkable advances in sonar technology, positioning capabilities, and computer processing power that have revolutionized the way we image the seafloor. The US Naval Oceanographic Office (NAVOCEANO) has updated its survey vessels and launches to the latest generation of technology and now possesses a tremendous ocean observing and mapping capability. However, the systems produce massive amounts of data that must be validated prior to inclusion in various bathymetry, hydrography, and imagery products. The key to meeting the challenge of the massive data volumes was to change the approach that required every data point be viewed. This was achieved with the replacement of the traditional line-by-line editing approach with an automated cleaning module, and an area-based editor. The approach includes a unique data structure that enables the direct access to the full resolution data from the area based view, including a direct interface to target files and imagery snippets from mosaic and full resolution imagery. The increased data volumes to be processed also offered tremendous opportunities in terms of visualization and analysis, and interactive 3D presentation of the complex multi-attribute data provided a natural complement to the area based processing. If properly geo-referenced and treated, the complex data sets can be presented in a natural and intuitive manner that allows the integration of multiple components each at their inherent level of resolution and without compromising the quantitative nature of the data. Artificial sun-illumination, shading, and 3-D rendering are used with digital bathymetric data to form natural looking and easily interpretable, yet quantitative, landscapes that allow the user to rapidly identify the data requiring further processing or analysis. 
Color can be used to represent depth or other parameters (like backscatter, quality factors or sediment properties), which can be draped over the DTM, or high resolution imagery can be texture mapped on bathymetric data. The presentation will demonstrate the new approach of the integrated area based processing and 3D visualization with a number of data sets from recent surveys.
Feasibility study: real-time 3-D ultrasound imaging of the brain.
Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D
2004-10-01
We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.
REVIEW ARTICLE: The next 50 years of the SI: a review of the opportunities for the e-Science age
NASA Astrophysics Data System (ADS)
Foster, Marcus P.
2010-12-01
The International System of Units (SI) was declared as a practical and evolving system in 1960 and is now 50 years old. A large amount of theoretical and experimental work has been conducted to change the standards for the base units from artefacts to physical constants, to improve their stability and reproducibility. Less attention, however, has been paid to improving the SI definitions, utility and usability, which suffer from contradictions, ambiguities and inconsistencies. While humans can often resolve these issues contextually, computers cannot. As an ever-increasing volume and proportion of data about physical quantities is collected, exchanged, processed and rendered by computers, this paper argues that the SI definitions, symbols and syntax should be made more rigorous, so they can be represented wholly and unambiguously in ontologies, programs, data and text, and so the SI notation can be rendered faithfully in print and on screen.
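The argument for machine-processable SI notation can be illustrated with a minimal dimensional-analysis sketch: once units are encoded as exponent vectors over the seven SI base units, a program can check quantity algebra unambiguously, with no contextual interpretation. The encoding below is an illustrative assumption, not an official SI ontology.

```python
# Represent each quantity's dimension as a tuple of exponents over the seven
# SI base units (m, kg, s, A, K, mol, cd). Illustrative, not a standard schema.
BASE = ('m', 'kg', 's', 'A', 'K', 'mol', 'cd')

def dim(**exps):
    """Build a dimension vector from named base-unit exponents."""
    return tuple(exps.get(u, 0) for u in BASE)

def mul(a, b):
    """Multiplying quantities adds their base-unit exponents."""
    return tuple(x + y for x, y in zip(a, b))

VELOCITY = dim(m=1, s=-1)
TIME     = dim(s=1)
LENGTH   = dim(m=1)

# A program can verify unambiguously that velocity * time yields a length:
assert mul(VELOCITY, TIME) == LENGTH
```

With such a representation, the inconsistencies a human resolves "contextually" become mechanical checks a computer can perform on every quantity it exchanges.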
Design and implementation of a 3D ocean virtual reality and visualization engine
NASA Astrophysics Data System (ADS)
Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing
2012-12-01
In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high fidelity simulation of ocean environment, visualization of massive and multidimensional marine data, and imitation of marine lives. VV-Ocean is composed of five modules, i.e. memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, imitating and simulating marine lives intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce drifting and diffusion processes of oil spilling from sea bottom to surface. Environment factors such as ocean current and wind field have been considered in this simulation. On this platform oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
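The oil-particle abstraction described above can be sketched as a simple per-step advection rule: each particle drifts with the current plus a small wind-drag fraction and rises buoyantly toward the surface. The coefficients and function below are illustrative assumptions, not VV-Ocean's actual parameters.

```python
def advect_particles(particles, current, wind, dt, wind_drag=0.03, rise_speed=0.05):
    """Advance oil-spill particles one time step: horizontal drift from the
    ocean current plus a wind-drag fraction, and buoyant rise toward the
    surface (z = 0). Coefficients are illustrative, not VV-Ocean's values."""
    moved = []
    for x, y, z in particles:
        x += (current[0] + wind_drag * wind[0]) * dt
        y += (current[1] + wind_drag * wind[1]) * dt
        z = min(0.0, z + rise_speed * dt)  # rise, capped at the surface
        moved.append((x, y, z))
    return moved
```

Calling this once per frame over many particles reproduces the drifting-and-rising behaviour of a spill plume at interactive rates.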
Interactive browsing of 3D environment over the Internet
NASA Astrophysics Data System (ADS)
Zhang, Cha; Li, Jin
2000-12-01
In this paper, we describe a system for wandering in a realistic environment over the Internet. The environment is captured by the concentric mosaic, compressed via the reference block coder (RBC), and accessed and delivered over the Internet through the virtual media (Vmedia) access protocol. Capturing the environment through the concentric mosaic is easy. We mount a camera at the end of a level beam, and shoot images as the beam rotates. The huge dataset of the concentric mosaic is then compressed through the RBC, which is specifically designed for both high compression efficiency and just-in-time (JIT) rendering. Through the JIT rendering function, only a portion of the RBC bitstream is accessed, decoded and rendered for each virtual view. A multimedia communication protocol -- the Vmedia protocol, is then proposed to deliver the compressed concentric mosaic data over the Internet. Only the bitstream segments corresponding to the current view are streamed over the Internet. Moreover, the delivered bitstream segments are managed by a local Vmedia cache so that frequently used bitstream segments need not be streamed over the Internet repeatedly, and the Vmedia is able to handle a RBC bitstream larger than its memory capacity. A Vmedia concentric mosaic interactive browser is developed where the user can freely wander in a realistic environment, e.g., rotate around, walk forward/backward and sidestep, even under a tight bandwidth of 33.6 kbps.
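The local Vmedia cache described above can be sketched as a least-recently-used (LRU) cache keyed by bitstream segment, evicting when a byte budget is exceeded so the client can handle an RBC bitstream larger than its memory. The class and eviction policy below are an illustrative sketch, not the Vmedia protocol's actual design.

```python
from collections import OrderedDict

class SegmentCache:
    """Tiny LRU cache in the spirit of the Vmedia cache: frequently used
    bitstream segments stay local; least-recently-used segments are evicted
    once the byte budget is exceeded. Illustrative only."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self._store = OrderedDict()  # segment_id -> segment bytes

    def get(self, seg_id):
        data = self._store.get(seg_id)
        if data is not None:
            self._store.move_to_end(seg_id)  # mark as recently used
        return data

    def put(self, seg_id, data):
        if seg_id in self._store:
            self.used -= len(self._store.pop(seg_id))
        self._store[seg_id] = data
        self.used += len(data)
        while self.used > self.capacity:
            _, evicted = self._store.popitem(last=False)  # evict LRU entry
            self.used -= len(evicted)
```

A `get` miss would trigger a network fetch of just that segment, so frequently rendered views never re-stream their data over the 33.6 kbps link.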
Look and Feel: Haptic Interaction for Biomedicine
1995-10-01
[Only abstract fragments survive in this record.] Simulation of the DSM state is accomplished by a multi-step algorithm that is evaluated within the topology of the model; during each time step, forces are summed for each mobile atom based on external forces. Surviving fragments also mention volumetric properties, conserving computation power by rendering media local to the interaction point, and evaluating the simulation within the model topology.
Expanding the Interaction Lexicon for 3D Graphics
2001-11-01
[Only abstract fragments survive in this record.] The author believes that extending the system to work with image-based rendering engines is straightforward, for example by modifying plenoptic image editing (M. Seitz and Kiriakos N. Kutulakos, "Plenoptic Image Editing," International Conference on Computer Vision '98, pages 17-24).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer
This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.
Untermyer, S.
1962-04-10
A boiling reactor having a reactivity which is reduced by an increase in the volume of vaporized coolant therein is described. In this system unvaporized liquid coolant is extracted from the reactor, heat is extracted therefrom, and it is returned to the reactor as sub-cooled liquid coolant. This reduces a portion of the coolant which includes vaporized coolant within the core assembly thereby enhancing the power output of the assembly and rendering the reactor substantially self-regulating. (AEC)
1995-02-01
[Only abstract fragments survive in this record.] The surviving text lists organizational missions: developing capabilities for the common benefit of the NATO community; providing scientific and technical advice and assistance to the Military Committee; exchange of scientific and technical information; providing assistance to member nations for the purpose of increasing their scientific and technical potential; and rendering scientific and technical assistance, as requested, to other NATO bodies and to member nations.
Quality improving techniques for free-viewpoint DIBR
NASA Astrophysics Data System (ADS)
Do, Luat; Zinger, Sveta; de With, Peter H. N.
2010-02-01
Interactive free-viewpoint selection applied to a 3D multi-view signal is a possible attractive feature of the rapidly developing 3D TV media. This paper explores a new rendering algorithm that computes a free-viewpoint based on depth image warping between two reference views from existing cameras. We have developed three quality enhancing techniques that specifically aim at solving the major artifacts. First, resampling artifacts are filled in by a combination of median filtering and inverse warping. Second, contour artifacts are processed while omitting warping of edges at high discontinuities. Third, we employ a depth signal for more accurate disocclusion inpainting. We obtain an average PSNR gain of 3 dB and 4.5 dB for the 'Breakdancers' and 'Ballet' sequences, respectively, compared to recently published results. While experimenting with synthetic data, we observe that the rendering quality is highly dependent on the complexity of the scene. Moreover, experiments are performed using compressed video from surrounding cameras. The overall system quality is dominated by the rendering quality and not by coding.
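The first enhancement, filling resampling holes with a median of valid neighbours, can be sketched in miniature. This single-pass 3x3 filter is a simplified stand-in for the paper's combination of median filtering and inverse warping, with a hole marker value assumed for illustration.

```python
def fill_holes_median(img, hole=0):
    """Fill warping holes (pixels equal to `hole`) with the median of the
    valid pixels in their 3x3 neighbourhood, in one pass. Simplified
    stand-in for the median filtering + inverse warping step."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] != hole:
                continue  # valid pixel: leave untouched
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))
                    if img[j][i] != hole]
            if vals:
                vals.sort()
                out[y][x] = vals[len(vals) // 2]
    return out
```

In a real pipeline the same idea runs per colour channel after depth-image warping, before the contour and disocclusion steps.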
Sukop, Michael C.; Cunningham, Kevin J.
2014-01-01
Digital optical borehole images at approximately 2 mm vertical resolution and borehole caliper data were used to create three-dimensional renderings of the distribution of (1) matrix porosity and (2) vuggy megaporosity for the karst carbonate Biscayne aquifer in southeastern Florida. The renderings based on the borehole data were used as input into Lattice Boltzmann methods to obtain intrinsic permeability estimates for this extremely transmissive aquifer, where traditional aquifer test methods may fail due to very small drawdowns and non-Darcian flow that can reduce apparent hydraulic conductivity. Variogram analysis of the borehole data suggests a nearly isotropic rock structure at lag lengths up to the nominal borehole diameter. A strong correlation between the diameter of the borehole and the presence of vuggy megaporosity in the data set led to a bias in the variogram where the computed horizontal spatial autocorrelation is strong at lag distances greater than the nominal borehole size. Lattice Boltzmann simulation of flow across a 0.4 × 0.4 × 17 m (2.72 m3 volume) parallel-walled column of rendered matrix and vuggy megaporosity indicates a high hydraulic conductivity of 53 m s−1. This value is similar to previous Lattice Boltzmann calculations of hydraulic conductivity in smaller limestone samples of the Biscayne aquifer. The development of simulation methods that reproduce dual-porosity systems with higher resolution and fidelity and that consider flow through horizontally longer renderings could provide improved estimates of the hydraulic conductivity and help to address questions about the importance of scale.
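The link between a Lattice Boltzmann flow result and the hydraulic conductivity reported above is the standard Darcy relation K = kρg/μ, converting intrinsic permeability k (m²) to conductivity K (m/s) for a given fluid. The constants below assume water near 20 °C and are illustrative, not the study's exact values.

```python
def hydraulic_conductivity(k_intrinsic, rho=998.0, g=9.81, mu=1.0e-3):
    """Convert intrinsic permeability k [m^2] (as a Lattice Boltzmann
    simulation yields) to hydraulic conductivity K [m/s] via K = k*rho*g/mu.
    Defaults assume water near 20 degC: density rho [kg/m^3],
    gravity g [m/s^2], dynamic viscosity mu [Pa*s]."""
    return k_intrinsic * rho * g / mu
```

The very high conductivities typical of the Biscayne aquifer correspond to permeabilities many orders of magnitude above the 1e-12 m² of this example.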
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gockner, T. L., E-mail: theresa.gockner@med.uni-heidelberg.de; Zelzer, S., E-mail: s.zelzer@dkfz-heidelberg.de; Mokry, T., E-mail: theresa.mokry@med.uni-heidelberg.de
Purpose: This study was designed to compare technical parameters during ablation, as well as CT 3D rendering and histopathology of the ablation zone, between sphere-enhanced microwave ablation (sMWA) and bland microwave ablation (bMWA). Methods: In six sheep livers, 18 microwave ablations were performed with identical system presets (power output: 80 W, ablation time: 120 s). In three sheep, transarterial embolisation (TAE) was performed immediately before microwave ablation using spheres (diameter: 40 ± 10 μm) (sMWA). In the other three sheep, microwave ablation was performed without sphere embolisation (bMWA). Contrast-enhanced CT, sacrifice, and liver harvest followed immediately after microwave ablation. Study goals included technical parameters during ablation (resulting power output, ablation time), geometry of the ablation zone applying specific CT 3D rendering with a software prototype (short axis of the ablation zone, volume of the largest aligned ablation sphere within the ablation zone), and histopathology (hematoxylin-eosin, Masson Goldner, and TUNEL). Results: Resulting power outputs/ablation times were 78.7 ± 1.0 W/120 ± 0.0 s for bMWA and 78.4 ± 1.0 W/120 ± 0.0 s for sMWA (n.s., respectively). Short axis/volume were 23.7 ± 3.7 mm/7.0 ± 2.4 cm³ for bMWA and 29.1 ± 3.4 mm/11.5 ± 3.9 cm³ for sMWA (P < 0.01, respectively). Histopathology confirmed the signs of coagulation necrosis as well as early and irreversible cell death for bMWA and sMWA. For sMWA, spheres were detected within, at the rim of, and outside the ablation zone without conspicuous features. Conclusions: Specific CT 3D rendering identifies a larger ablation zone for sMWA compared with bMWA. The histopathological signs and the detectable amount of cell death are comparable for both groups. When comparing sMWA with bMWA, TAE has no effect on the technical parameters during ablation.
Virtual Environment for Surgical Room of the Future.
1995-10-01
[Only fragments of a two-column topic outline survive in this record.] Recoverable topics include: three-dimensional modeling (wire frame, surface, solid); acoustic modeling based on radiosity; dynamic interaction; rendering and shadowing (ray tracing, radiosity); fluid flow; animation; infection control of people and equipment; object recognition; and communication.
Foundations for a Post-Modern Curriculum.
ERIC Educational Resources Information Center
Doll, William E., Jr.
This paper suggests that present-day curriculum, based on Newtonian thought, has been rendered obsolete by the holistic and interactive "post-modern" world view based on quantum physics, nonlinear mathematics, general systems theory, and Ilya Prigogine's nonequilibrium thermodynamics. The Newtonian world view, which is linear and…
ISS Radiation Shielding and Acoustic Simulation Using an Immersive Environment
NASA Technical Reports Server (NTRS)
Verhage, Joshua E.; Sandridge, Chris A.; Qualls, Garry D.; Rizzi, Stephen A.
2002-01-01
The International Space Station Environment Simulator (ISSES) is a virtual reality application that uses high-performance computing, graphics, and audio rendering to simulate the radiation and acoustic environments of the International Space Station (ISS). This CAVE application allows the user to maneuver to different locations inside or outside of the ISS and interactively compute and display the radiation dose at a point. The directional dose data is displayed as a color-mapped sphere that indicates the relative levels of radiation from all directions about the center of the sphere. The noise environment is rendered in real time over headphones or speakers and includes non-spatial background noise, such as air-handling equipment, and spatial sounds associated with specific equipment racks, such as compressors or fans. Changes can be made to equipment rack locations that produce changes in both the radiation shielding and system noise. The ISSES application allows for interactive investigation and collaborative trade studies between radiation shielding and noise for crew safety and comfort.
A transparently scalable visualization architecture for exploring the universe.
Fu, Chi-Wing; Hanson, Andrew J
2007-01-01
Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
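The power scaled coordinates (PSC) idea can be sketched as a scale-factored representation: store a direction part near unit magnitude plus an exponent, so precision is preserved across scales from metres to gigaparsecs. The base of 10 and the exact encoding below are assumptions for illustration, not the authors' implementation.

```python
import math

BASE = 10.0  # scaling base; assumed here, the paper's choice may differ

def psc_encode(x, y, z):
    """Encode a 3D point as (x', y', z', s) with (x, y, z) == (x', y', z') * BASE**s,
    keeping the direction part near unit magnitude for floating-point precision."""
    m = max(abs(x), abs(y), abs(z))
    if m == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    s = float(math.floor(math.log(m, BASE)))
    f = BASE ** s
    return (x / f, y / f, z / f, s)

def psc_decode(p):
    """Recover the plain 3D coordinates from a PSC tuple."""
    x, y, z, s = p
    f = BASE ** s
    return (x * f, y * f, z * f)
```

A light-year-scale coordinate round-trips through this encoding with only floating-point rounding error, which is the property that makes scale-neutral rendering pipelines workable.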
Investigations on landmine detection by neutron-based techniques.
Csikai, J; Dóczi, R; Király, B
2004-07-01
Principles and techniques of some neutron-based methods used to identify antipersonnel landmines (APMs) are discussed. New results have been achieved in the field of neutron reflection, transmission, scattering and reaction techniques. Some conclusions are as follows: The neutron hand-held detector is suitable for the observation of an anomaly caused by a DLM2-like sample in different soils with a scanning speed of 1 m²/1.5 min; the reflection cross section of thermal neutrons rendered the determination of the equivalent thickness of different soil components possible; a simple method was developed for the determination of the thermal neutron flux perturbation factor needed for multi-elemental analysis of bulky samples; unfolded spectra of elastically backscattered neutrons using broad-spectrum sources render the identification of APMs possible; the knowledge of leakage spectra of different source neutrons is indispensable for the determination of the differential and integrated reaction rates and, through them, the dimensions of the interrogated volume; the precise determination of the C/O atom fraction requires investigations of the angular distribution of the 6.13 MeV gamma ray emitted in the ¹⁶O(n,n′γ) reaction. These results, in addition to the identification of landmines, render the improvement of non-intrusive neutron methods possible.
A New Approach to the Visual Rendering of Mantle Tomography
NASA Astrophysics Data System (ADS)
Holtzman, B. K.; Pratt, M. J.; Turk, M.; Hannasch, D. A.
2016-12-01
Visualization of mantle tomographic models requires a range of subjective aesthetic decisions that are often made subconsciously or left unarticulated by authors. Many of these decisions affect the interpretation of the model and therefore should be articulated and understood. In 2D, these decisions are manifest in the choice of colormap, including the data values associated with the neutral/transitional color band, as well as the correspondence between the extrema in the colormap and the extrema of the parameters. For example, we generally choose warm colors to signify slow velocities (or perturbations) and cool colors to signify fast ones, but where is the transition, and what are the color gradients from transition to extrema? In 3D, volumes are generally rendered by choosing an isosurface of a velocity perturbation (relative to a model at each depth) and coloring it slow to fast. The choice of isosurface is arbitrary or guided by a researcher's intuition, again strongly affecting (or driven by) the interpretation. Here, we present a different approach to 3D rendering of tomography models, using true volumetric rendering with "yt", a Python package for visualization and analysis of data. In our approach, we do not use isosurfaces; instead, we render the extrema in the tomographic model as the most opaque, with an opacity function that reaches zero (totally transparent) at dynamically selected values, or at the average value at each depth. The intent is that the most robust aspects of the model are visually clear, and the visualization emphasizes the nature of the interfaces between regions as well as the form of distinct mantle regions. Much of the current scientific discussion in upper mantle tomography focuses on the nature of interfaces, so we will demonstrate how decisions in the definition of the transparent regions influence interpretation of tomographic models. Our aim is to develop a visual language for tomographic visualization that can help focus geodynamic questions.
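The opacity strategy described above, transparent at the per-depth average and most opaque at the extrema, can be sketched as a simple transfer function. The linear ramp is an illustrative choice; the authors' actual function in yt may differ.

```python
def opacity(v, v_avg, v_min, v_max):
    """Opacity transfer function in the spirit described above: fully
    transparent at the per-depth average value, ramping toward fully
    opaque at both model extrema. Linear ramp chosen for illustration."""
    span = (v_max - v_avg) if v >= v_avg else (v_avg - v_min)
    if span <= 0:
        return 0.0
    return min(1.0, abs(v - v_avg) / span)
```

Because no isosurface threshold is chosen, the rendered image fades smoothly between regions, which is exactly what makes the interfaces, rather than arbitrary contours, visually prominent.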
PPI layouts: BioJS components for the display of Protein-Protein Interactions.
Salazar, Gustavo A; Meintjes, Ayton; Mulder, Nicola
2014-01-01
We present two web-based components for the display of Protein-Protein Interaction networks using different self-organizing layout methods: force-directed and circular. These components conform to the BioJS standard and can be rendered in an HTML5-compliant browser without the need for third-party plugins. We provide examples of interaction networks and how the components can be used to visualize them, and refer to a more complex tool that uses these components. http://github.com/biojs/biojs; http://dx.doi.org/10.5281/zenodo.7753.
Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century
NASA Astrophysics Data System (ADS)
Bartsch, H.; Erlebacher, G.
2003-12-01
amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.
Interactive Volume Rendering of Diffusion Tensor Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hlawitschka, Mario; Weber, Gunther; Anwander, Alfred
As 3D volumetric images of the human body become an increasingly crucial source of information for the diagnosis and treatment of a broad variety of medical conditions, advanced techniques that allow clinicians to efficiently and clearly visualize volumetric images become increasingly important. Interaction has proven to be a key concept in analysis of medical images because static images of 3D data are prone to artifacts and misunderstanding of depth. Furthermore, fading out clinically irrelevant aspects of the image while preserving contextual anatomical landmarks helps medical doctors to focus on important parts of the images without becoming disoriented. Our goal was to develop a tool that unifies interactive manipulation and context-preserving visualization of medical images with a special focus on diffusion tensor imaging (DTI) data. At each image voxel, DTI provides a 3 x 3 tensor whose entries represent the 3D statistical properties of water diffusion locally. Water motion that is preferential to specific spatial directions suggests structural organization of the underlying biological tissue; in particular, in the human brain, the naturally occurring diffusion of water in the axon portion of neurons is predominantly anisotropic along the longitudinal direction of the elongated, fiber-like axons [MMM+02]. This property has made DTI an emerging source of information about the structural integrity of axons and axonal connectivity between brain regions, both of which are thought to be disrupted in a broad range of medical disorders including multiple sclerosis, cerebrovascular disease, and autism [Mos02, FCI+01, JLH+99, BGKM+04, BJB+03].
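The anisotropy notion in this abstract is usually quantified by fractional anisotropy (FA), a standard DTI scalar computed from the eigenvalues of the 3x3 tensor: 0 for isotropic diffusion, approaching 1 for strongly directional, fiber-like diffusion. The sketch below uses the standard formula; it is not code from the system described.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Fractional anisotropy of a symmetric 3x3 diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    computed from the tensor's eigenvalues."""
    lam = np.linalg.eigvalsh(tensor)
    md = lam.mean()  # mean diffusivity
    denom = np.sqrt((lam ** 2).sum())
    if denom == 0:
        return 0.0
    return float(np.sqrt(1.5 * ((lam - md) ** 2).sum()) / denom)
```

Volume renderers for DTI commonly map such a per-voxel scalar to color or opacity, so anisotropic white-matter tracts stand out against isotropic tissue.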
Sorensen, Mads Solvsten; Mosegaard, Jesper; Trier, Peter
2009-06-01
Existing virtual simulators for middle ear surgery are based on 3-dimensional (3D) models from computed tomographic or magnetic resonance imaging data in which image quality is limited by the lack of detail (maximum, approximately 50 voxels/mm3), natural color, and texture of the source material.Virtual training often requires the purchase of a program, a customized computer, and expensive peripherals dedicated exclusively to this purpose. The Visible Ear freeware library of digital images from a fresh-frozen human temporal bone was segmented, and real-time volume rendered as a 3D model of high-fidelity, true color, and great anatomic detail and realism of the surgically relevant structures. A haptic drilling model was developed for surgical interaction with the 3D model. Realistic visualization in high-fidelity (approximately 125 voxels/mm3) and true color, 2D, or optional anaglyph stereoscopic 3D was achieved on a standard Core 2 Duo personal computer with a GeForce 8,800 GTX graphics card, and surgical interaction was provided through a relatively inexpensive (approximately $2,500) Phantom Omni haptic 3D pointing device. This prototype is published for download (approximately 120 MB) as freeware at http://www.alexandra.dk/ves/index.htm.With increasing personal computer performance, future versions may include enhanced resolution (up to 8,000 voxels/mm3) and realistic interaction with deformable soft tissue components such as skin, tympanic membrane, dura, and cholesteatomas-features some of which are not possible with computed tomographic-/magnetic resonance imaging-based systems.
Childhood Cumulative Risk Exposure and Adult Amygdala Volume and Function
Evans, Gary W.; Swain, James E.; King, Anthony P.; Wang, Xin; Javanbakht, Arash; Ho, S. Shaun; Angstadt, Michael; Phan, K. Luan; Xie, Hong; Liberzon, Israel
2015-01-01
Considerable work indicates that early cumulative risk exposure is aversive to human development, but very little research has examined neurological underpinnings of these robust findings. We investigated amygdala volume and reactivity to facial stimuli among adults (M = 23.7 years, n = 54) as a function of cumulative risk exposure during childhood (ages 9 and 13). In addition, we tested whether expected, cumulative risk elevations in amygdala volume would mediate functional reactivity of the amygdala during socio-emotional processing. Risks included substandard housing quality, noise, crowding, family turmoil, child separation from family, and violence. Total and left hemisphere adult amygdala volumes, respectively were positively related to cumulative risk exposure during childhood. The links between childhood cumulative risk exposure and elevated amygdala responses to emotionally neutral facial stimuli in adulthood were mediated by the respective amygdala volumes. Cumulative risk exposure in later adolescence (17 years), however, was unrelated to subsequent, adult amygdala volume or function. Physical and socioemotional risk exposures early in life appear to alter amygdala development, rendering adults more reactive to ambiguous stimuli such as neutral faces. These stress-related differences in childhood amygdala development might contribute to well-documented psychological distress as a function of early risk exposure. PMID:26469872
NASA Astrophysics Data System (ADS)
Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.
1995-04-01
We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
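The reconstruction step described above, placing tracked 2D images into a 3D volume, can be sketched as a per-pixel transform through the sensor pose. The function, its parameters, and the max-compounding rule are illustrative assumptions, not the authors' reconstruction software.

```python
import numpy as np

def insert_slice(volume, image, pose, spacing_mm, voxel_mm):
    """Scatter one tracked 2D ultrasound image into a 3D volume.
    `pose` is the 4x4 transform reported by the position sensor for the
    image plane; pixels are mapped to world space and compounded by max.
    Details are illustrative, not the paper's code."""
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            # Pixel -> world coordinates; the image lies in its own x-y plane.
            p = pose @ np.array([c * spacing_mm, r * spacing_mm, 0.0, 1.0])
            i, j, k = (p[:3] / voxel_mm).round().astype(int)
            if (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                    and 0 <= k < volume.shape[2]):
                volume[i, j, k] = max(volume[i, j, k], image[r, c])
    return volume
```

Sweeping the probe yields a sequence of (image, pose) pairs; inserting each one accumulates the 3D power Doppler volume that is then volume rendered.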
Moriyama, Muka; Ohno-Matsui, Kyoko; Hayashi, Kengo; Shimada, Noriaki; Yoshida, Takeshi; Tokoro, Takashi; Morita, Ikuo
2011-08-01
To analyze the topography of human eyes with pathologic myopia by high-resolution magnetic resonance imaging (MRI) with volume rendering of the acquired images. Observational case series. Eighty-six eyes of 44 patients with high myopia (refractive error ≥-8.00 diopters [D] or axial length >26.5 mm) were studied. Forty emmetropic eyes were examined as controls. The participants were examined with an MRI scanner (Signa HDxt 1.5T, GE Healthcare, Waukesha, WI), and T2-weighted cubes were obtained. Volume renderings of the images from high-resolution 3-dimensional (3D) data were done on a computer workstation. The margins of the globes were then identified semiautomatically by signal intensity, and the tissues outside the globes were removed. The main outcome measures were the 3D topographic characteristics of the globes and the distribution of 4 distinct globe shapes, classified according to the symmetry and the radius of curvature of the contour of the posterior segment: the barrel, cylindric, nasally distorted, and temporally distorted types. In 69.8% of the patients with bilateral high myopia, both eyes had the same ocular shape. The most protruded part of the globe existed along the central sagittal axis in 78.3% of eyes and was slightly inferior to the central axis in the remaining eyes. In 38 of 68 eyes (55.9%) with bilateral pathologic myopia, multiple protrusions were observed. The eyes with 2 protrusions were subdivided into those with nasal protrusions and those with temporal protrusions. The eyes with 3 protrusions were subdivided into nasal, temporal superior, and temporal inferior protrusions. The eyes with visual field defects that could not be explained by myopic fundus lesions significantly more frequently had a temporally distorted shape. Eyes with ≥2 protrusions had myopic chorioretinal atrophy significantly more frequently than eyes with ≤1 protrusion.
Our results demonstrate that it is possible to obtain a complete topographic image of human eyes by high-resolution MRI with volume-rendering techniques. The results showed that there are different ocular shapes in eyes with pathologic myopia, and that the difference in the ocular shape is correlated with the development of vision-threatening conditions in eyes with pathologic myopia. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Skype me! Socially Contingent Interactions Help Toddlers Learn Language
Roseberry, Sarah; Hirsh-Pasek, Kathy; Golinkoff, Roberta Michnick
2013-01-01
Language learning takes place in the context of social interactions, yet the mechanisms that render social interactions useful for learning language remain unclear. This paper focuses on whether social contingency might support word learning. Toddlers aged 24- to 30-months (N=36) were exposed to novel verbs in one of three conditions: live interaction training, socially contingent video training over video chat, and non-contingent video training (yoked video). Results suggest that children only learned novel verbs in socially contingent interactions (live interactions and video chat). The current study highlights the importance of social contingency in interactions for language learning and informs the literature on learning through screen media as the first study to examine word learning through video chat technology. PMID:24112079
Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred
2014-01-01
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. PMID:25145678
Exploring the Impacts of Social Networking Sites on Academic Relations in the University
ERIC Educational Resources Information Center
Rambe, Patient
2011-01-01
Social networking sites (SNS) affordances for persistent interaction, collective generation of knowledge, and formation of peer-based clusters for knowledge sharing render them useful for developing constructivist knowledge environments. However, notwithstanding their academic value, these environments are not necessarily insulated from the…
The Perceived Helpfulness of Rendering Emotional First Aid via Email
ERIC Educational Resources Information Center
Gilat, Itzhak; Reshef, Eyal
2015-01-01
The present research examined the perceived helpfulness of an increasingly widespread mode of psychological assistance, namely, emotional first aid via email. The sample comprised 62 naturally occurring email interactions between distressful clients and trained volunteers operating within the framework of the Israeli Association for Emotional…
Imai, Takashi; Kovalenko, Andriy; Hirata, Fumio
2005-04-14
The three-dimensional reference interaction site model (3D-RISM) theory is applied to the analysis of hydration effects on the partial molar volume of proteins. For the native structure of some proteins, the partial molar volume is decomposed into geometric and hydration contributions using the 3D-RISM theory combined with the geometric volume calculation. The hydration contributions are correlated with the surface properties of the protein. The thermal volume, which is the volume of voids around the protein induced by the thermal fluctuation of water molecules, is directly proportional to the accessible surface area of the protein. The interaction volume, which is the contribution of electrostatic interactions between the protein and water molecules, is apparently governed by the charged atomic groups on the protein surface. The polar atomic groups do not make any contribution to the interaction volume. The volume differences between low- and high-pressure structures of lysozyme are also analyzed by the present method.
Thoma, Daniel S; Buranawat, Borvornwut; Hämmerle, Christoph H F; Held, Ulrike; Jung, Ronald E
2014-04-01
To review the dental literature in terms of efficacy of soft tissue augmentation procedures around dental implants and in partially edentulous sites. A Medline search was performed for human studies augmenting keratinized mucosa (KM) and soft tissue volume around implants and in partially edentulous areas. Due to heterogeneity in between the studies, no meta-analyses could be performed. Nine (KM) and eleven (volume) studies met the inclusion criteria. An apically positioned flap/vestibuloplasty (APF/V) plus a graft material [free gingival graft (FGG)/subepithelial connective tissue graft (SCTG)/collagen matrix (CM)] resulted in an increase of keratinized tissue (1.4-3.3 mm). Statistically significantly better outcomes were obtained for APF/V plus FGG/SCTG compared with controls (APF/V alone; no treatment) (p < 0.05). For surgery time and patient morbidity, statistically significantly more favourable outcomes were reported for CM compared to SCTGs (p < 0.05) in two randomized controlled clinical trials (RCTs), even though rendering less keratinized tissue. SCTGs were the best-documented method for gain of soft tissue volume at implant sites and partially edentulous sites. Aesthetically at immediate implant sites, better papilla fill and higher marginal mucosal levels were obtained using SCTGs compared to non-grafted sites. An APF/V plus FGG/SCTG was the best-documented and most successful method to increase the width of KM. APF/V plus CM demonstrated less gain in KM, but also less patient morbidity and surgery time compared to APF/V plus SCTG based on two RCTs. Autogenous grafts (SCTG) rendered an increase in soft tissue thickness and better aesthetics compared to non-grafted sites. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Minamiguchi, Hiroki; Kawai, Nobuyuki; Sato, Morio; Ikoma, Akira; Sanda, Hiroki; Nakata, Kouhei; Tanaka, Fumihiro; Nakai, Motoki; Sonomura, Tetsuo; Murotani, Kazuhiro; Hosokawa, Seiki; Nishioku, Tadayoshi
2014-01-01
Aortography for detecting hemorrhage is limited when determining the catheter treatment strategy because the artery responsible for hemorrhage commonly overlaps organs and non-responsible arteries. Selective catheterization of untargeted arteries would result in repeated arteriography, large volumes of contrast medium, and extended time. A volume-rendered hemorrhage-responsible arteriogram created with 64 multidetector-row CT (64MDCT) during aortography (MDCTAo) can be used both for hemorrhage mapping and catheter navigation. The MDCTAo depicted hemorrhage in 61 of 71 cases of suspected acute arterial bleeding treated at our institute in the last 3 years. Complete hemostasis by embolization was achieved in all cases. The hemorrhage-responsible arteriogram was used for navigation during catheterization, thus assisting successful embolization. Hemorrhage was not visualized in the remaining 10 patients, of whom 6 had a pseudoaneurysm in a visceral artery; 1 with urinary bladder bleeding and 1 with chest wall hemorrhage had gauze tamponade; and 1 with urinary bladder hemorrhage and 1 with uterine hemorrhage had spastic arteries. Six patients with pseudoaneurysm underwent preventive embolization and the other 4 patients were managed by watchful observation. MDCTAo has the advantage of depicting the arteries responsible for hemoptysis, whether from the bronchial arteries or other systemic arteries, in a single scan. MDCTAo is particularly useful for identifying the source of acute arterial bleeding in the pancreatic arcade area, which is supplied by both the celiac and superior mesenteric arteries. In a case of pelvic hemorrhage, MDCTAo identified the responsible artery from among numerous overlapping visceral arteries that branched from the internal iliac arteries. In conclusion, a hemorrhage-responsible arteriogram created by 64MDCT immediately before catheterization is useful for deciding the catheter treatment strategy for acute arterial bleeding.
Realistic soft tissue deformation strategies for real time surgery simulation.
Shen, Yunhe; Zhou, Xiangmin; Zhang, Nan; Tamma, Kumar; Sweet, Robert
2008-01-01
A volume-preserving deformation method (VPDM) is developed in complement with the mass-spring method (MSM) to improve the deformation quality of the MSM to model soft tissue in surgical simulation. This method can also be implemented as a stand-alone model. The proposed VPDM satisfies Newton's laws of motion by obtaining the resultant vectors from an equilibrium condition. The proposed method has been tested in virtual surgery systems with haptic rendering demands.
USDA-ARS?s Scientific Manuscript database
Advances in micro-CT: digital computed tomography (CT) scanning uses X-rays to make detailed pictures of structures inside the body. Combining micro-CT with Digital Video Library systems, and linking this to Big Data, will change the way researchers, entomologists, and the public search and use anato...
1993-09-01
who under the terms of the Archeological and Historic Preservation Act must respond within 48 hours of notification. The DCA may render an immediate...1965 The Surviving Chumash. UCLA Archaeological Survey Annual Reports 65:277-302. Grant, Campbell 1973a Chumash: Introduction. In R.F. Heizer, ed...Coastal Chumash. In R.F. Heizer, ed., California. Volume 8, Handbook of North American Indians, William C. Sturtevant, General Editor. Washington
The Archaeology and History of Lake Ray Roberts. Volume 1. Cultural Resources Survey.
1982-03-01
the survey have rendered the information they contain through the recording process and should be determined ineligible for further study. Fifty-five...clay features were actually human hearths (Heizer and Brooks 1965), and the possibility that the Clovis point was planted (Heizer 1974). Recent research...15:157-172. Hart, John Fraser 1976 The look of the land. Prentice-Hall, Englewood Cliffs, New Jersey. Heizer, R.F. 1974 Some thoughts on hoaxes
CrossTalk: The Journal of Defense Software Engineering. Volume 24, Number 2, March/April 2011
2011-04-01
and insider attacks, we plan to conduct experiments and collect concrete and empirical evidence. As we have done in prior research projects [11...subsequent service failure." Yet, a faulty state can continue to render service; an erroneous state cannot. Consider a system that receives concrete ...that does not satisfy specifications. The faults in the concrete are not detected during (faulty) acceptance testing. A two-deck bridge is built using
Three-dimensional x-ray diffraction nanoscopy
NASA Astrophysics Data System (ADS)
Nikulin, Andrei Y.; Dilanian, Ruben A.; Zatsepin, Nadia A.; Muddle, Barry C.
2008-08-01
A novel approach to x-ray diffraction data analysis for non-destructive determination of the shape of nanoscale particles and clusters in three dimensions is illustrated with representative examples of composite nanostructures. The technique is insensitive to x-ray coherence, which allows 3D reconstruction of a modal image without tomographic synthesis and in-situ analysis of a large (over several cubic millimeters) volume of material with a spatial resolution of a few nanometers, rendering the approach suitable for laboratory facilities.
Bushong, Eric A; Johnson, Donald D; Kim, Keun-Young; Terada, Masako; Hatori, Megumi; Peltier, Steven T; Panda, Satchidananda; Merkle, Arno; Ellisman, Mark H
2015-02-01
The recently developed three-dimensional electron microscopic (EM) method of serial block-face scanning electron microscopy (SBEM) has rapidly established itself as a powerful imaging approach. Volume EM imaging with this scanning electron microscopy (SEM) method requires intense staining of biological specimens with heavy metals to allow sufficient back-scatter electron signal and also to render specimens sufficiently conductive to control charging artifacts. These more extreme heavy metal staining protocols render specimens light opaque and make it much more difficult to track and identify regions of interest (ROIs) for the SBEM imaging process than for a typical thin section transmission electron microscopy correlative light and electron microscopy study. We present a strategy employing X-ray microscopy (XRM) both for tracking ROIs and for increasing the efficiency of the workflow used for typical projects undertaken with SBEM. XRM was found to reveal an impressive level of detail in tissue heavily stained for SBEM imaging, allowing for the identification of tissue landmarks that can be subsequently used to guide data collection in the SEM. Furthermore, specific labeling of individual cells using diaminobenzidine is detectable in XRM volumes. We demonstrate that tungsten carbide particles or upconverting nanophosphor particles can be used as fiducial markers to further increase the precision and efficiency of SBEM imaging.
Bushong, Eric A.; Johnson, Donald D.; Kim, Keun-Young; Terada, Masako; Hatori, Megumi; Peltier, Steven T.; Panda, Satchidananda; Merkle, Arno; Ellisman, Mark H.
2015-01-01
The recently developed three-dimensional electron microscopic (EM) method of serial block-face scanning electron microscopy (SBEM) has rapidly established itself as a powerful imaging approach. Volume EM imaging with this scanning electron microscopy (SEM) method requires intense staining of biological specimens with heavy metals to allow sufficient back-scatter electron signal and also to render specimens sufficiently conductive to control charging artifacts. These more extreme heavy metal staining protocols render specimens light opaque and make it much more difficult to track and identify regions of interest (ROIs) for the SBEM imaging process than for a typical thin section transmission electron microscopy correlative light and electron microscopy study. We present a strategy employing X-ray microscopy (XRM) both for tracking ROIs and for increasing the efficiency of the workflow used for typical projects undertaken with SBEM. XRM was found to reveal an impressive level of detail in tissue heavily stained for SBEM imaging, allowing for the identification of tissue landmarks that can be subsequently used to guide data collection in the SEM. Furthermore, specific labeling of individual cells using diaminobenzidine is detectable in XRM volumes. We demonstrate that tungsten carbide particles or upconverting nanophosphor particles can be used as fiducial markers to further increase the precision and efficiency of SBEM imaging. PMID:25392009
Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko
2011-10-01
To evaluate the anatomy of cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution of 3D PSIF-DWI, we collected imaging data of 20-side cavernous regions in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized and 12 trochlear nerves and 6 abducens nerves were partially identified. We also presented preliminary clinical experiences in two cases with pituitary adenomas. The anatomical relationship between the tumor and cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended by 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high resolution "cranial nerve imaging", which visualizes the whole length of the cranial nerves including the parts in the blood flow as in the cavernous sinus region.
Individual differences in posterior cortical volume correlate with proneness to pride and gratitude
Zahn, Roland; Garrido, Griselda; Moll, Jorge
2014-01-01
Proneness to specific moral sentiments (e.g. pride, gratitude, guilt, indignation) has been linked with individual variations in functional MRI (fMRI) response within anterior brain regions whose lesion leads to inappropriate behaviour. However, the role of structural anatomical differences in rendering individuals prone to particular moral sentiments relative to others is unknown. Here, we investigated grey matter volumes (VBM8) and proneness to specific moral sentiments on a well-controlled experimental task in healthy individuals. Individuals with smaller cuneus, and precuneus volumes were more pride-prone, whereas those with larger right inferior temporal volumes experienced gratitude more readily. Although the primary analysis detected no associations with guilt- or indignation-proneness, subgenual cingulate fMRI responses to guilt were negatively correlated with grey matter volumes in the left superior temporal sulcus and anterior dorsolateral prefrontal cortices (right >left). This shows that individual variations in functional activations within critical areas for moral sentiments were not due to grey matter volume differences in the same areas. Grey matter volume differences between healthy individuals may nevertheless play an important role by affecting posterior cortical brain systems that are non-critical but supportive for the experience of specific moral sentiments. This may be of particular relevance when their experience depends on visuo-spatial elaboration. PMID:24106333
Volume estimation of brain abnormalities in MRI data
NASA Astrophysics Data System (ADS)
Suprijadi, Pratama, S. H.; Haryanto, F.
2014-02-01
Abnormalities of brain tissue are a crucial issue in the medical field. Such conditions can be recognized through segmentation of certain regions in medical images obtained from an MRI dataset. Image processing is one of the computational methods that is very helpful for analyzing MRI data. In this study, a combination of segmentation and image rendering was used to isolate tumor and stroke regions. Two thresholding methods were employed to segment the abnormal areas, followed by filtering to reduce non-abnormal regions. Each MRI image is labeled and then used to estimate the volumes of tumor- and stroke-attacked areas. The algorithms are shown to be successful in isolating tumor and stroke in MRI images, based on the thresholding parameter and the stated detection accuracy.
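The pipeline described in this abstract (dual thresholding per slice, then voxel counting for volume estimation) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the threshold values, voxel spacing, and function name are assumptions.

```python
import numpy as np

def estimate_abnormality_volume(slices, lower, upper, voxel_volume_mm3):
    """Segment voxels whose intensity falls in [lower, upper] across a
    stack of 2D MRI slices and estimate the segmented region's volume."""
    voxel_count = 0
    for img in slices:                             # one 2D array per slice
        mask = (img >= lower) & (img <= upper)     # dual-threshold segmentation
        voxel_count += int(mask.sum())
    return voxel_count * voxel_volume_mm3

# Toy example: two 4x4 "slices", each containing a bright 2x2 lesion.
slice_a = np.zeros((4, 4)); slice_a[1:3, 1:3] = 200
slice_b = np.zeros((4, 4)); slice_b[0:2, 0:2] = 210
volume = estimate_abnormality_volume([slice_a, slice_b], 150, 255,
                                     voxel_volume_mm3=1.0)
print(volume)  # 8 segmented voxels -> 8.0 mm^3
```

In practice the filtering step mentioned in the abstract (e.g., removing small connected components) would follow the masking step before counting.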
The bicentennial volume of the British Journal of Psychiatry: the winding pathway of mental science.
Tyrer, Peter; Craddock, Nick
2012-01-01
The Asylum Journal, first published in 1853, is now, as the British Journal of Psychiatry, in its 200th volume. It has changed greatly in its breadth and scope, but its core values and concerns--professional respect, removal of stigma, delivery of care, understanding of pathology, and informed treatment--have remained at its heart throughout. We predict some changes for the future, but not dramatic ones, and conclude that the impinging advances of science will elucidate and refine, but not remove, the need for a journal that is proud to represent psychiatry or, in the words of John Bucknill, its first editor, 'to render prominent its characteristics and to stamp it as a specialty'.
SemVisM: semantic visualizer for medical image
NASA Astrophysics Data System (ADS)
Landaeta, Luis; La Cruz, Alexandra; Baranya, Alexander; Vidal, María-Esther
2015-01-01
SemVisM is a toolbox that combines medical informatics and computer graphics tools for reducing the semantic gap between low-level features and high-level semantic concepts/terms in the images. This paper presents a novel strategy for visualizing semantically annotated medical data, combining rendering techniques and segmentation algorithms. SemVisM comprises two main components: i) AMORE (A Modest vOlume REgister) to handle input data (RAW, DAT or DICOM) and to initially annotate the images using terms defined in medical ontologies (e.g., MeSH, FMA or RadLex), and ii) VOLPROB (VOlume PRObability Builder) for generating the annotated volumetric data containing the classified voxels that belong to a particular tissue. SemVisM is built on top of the semantic visualizer ANISE.
Predictability, Force and (Anti-)Resonance in Complex Object Control.
Maurice, Pauline; Hogan, Neville; Sternad, Dagmar
2018-04-18
Manipulation of complex objects as in tool use is ubiquitous and has given humans an evolutionary advantage. This study examined the strategies humans choose when manipulating an object with underactuated internal dynamics, such as a cup of coffee. The object's dynamics renders the temporal evolution complex, possibly even chaotic, and difficult to predict. A cart-and-pendulum model, loosely mimicking coffee sloshing in a cup, was implemented in a virtual environment with a haptic interface. Participants rhythmically manipulated the virtual cup containing a rolling ball; they could choose the oscillation frequency, while the amplitude was prescribed. Three hypotheses were tested: 1) humans decrease interaction forces between hand and object; 2) humans increase the predictability of the object dynamics; 3) humans exploit the resonances of the coupled object-hand system. Analysis revealed that humans chose either a high-frequency strategy with anti-phase cup-and-ball movements or a low-frequency strategy with in-phase cup-and-ball movements. Counter Hypothesis 1, they did not decrease interaction force; instead, they increased the predictability of the interaction dynamics, quantified by mutual information, supporting Hypothesis 2. To address Hypothesis 3, frequency analysis of the coupled hand-object system revealed two resonance frequencies separated by an anti-resonance frequency. The low-frequency strategy exploited one resonance, while the high-frequency strategy afforded more choice, consistent with the frequency response of the coupled system; both strategies avoided the anti-resonance. Hence, humans did not prioritize interaction force, but rather strategies that rendered interactions predictable. These findings highlight that physical interactions with complex objects pose control challenges not present in unconstrained movements.
O'Brien, Caroline C; Kolandaivelu, Kumaran; Brown, Jonathan; Lopes, Augusto C; Kunio, Mie; Kolachalama, Vijaya B; Edelman, Elazer R
2016-01-01
Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D 'clouds' of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessel, r2 = 0.91, 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional contexts in defining the hemodynamic consequence of local deployment errors. Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interests in simulation-derived hemodynamic assessment as surrogate measures of biological risk, such fused modalities offer a new window into patient-specific implant environments.
Note: Nonpolar solute partial molar volume response to attractive interactions with water.
Williams, Steven M; Ashbaugh, Henry S
2014-01-07
The impact of attractive interactions on the partial molar volumes of methane-like solutes in water is characterized using molecular simulations. Attractions account for a significant 20% volume drop between a repulsive Weeks-Chandler-Andersen and full Lennard-Jones description of methane interactions. The response of the volume to interaction perturbations is characterized by linear fits to our simulations and a rigorous statistical thermodynamic expression for the derivative of the volume to increasing attractions. While a weak non-linear response is observed, an average effective slope accurately captures the volume decrease. This response, however, is anticipated to become more non-linear with increasing solute size.
Note: Nonpolar solute partial molar volume response to attractive interactions with water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Steven M.; Ashbaugh, Henry S., E-mail: hanka@tulane.edu
2014-01-07
The impact of attractive interactions on the partial molar volumes of methane-like solutes in water is characterized using molecular simulations. Attractions account for a significant 20% volume drop between a repulsive Weeks-Chandler-Andersen and full Lennard-Jones description of methane interactions. The response of the volume to interaction perturbations is characterized by linear fits to our simulations and a rigorous statistical thermodynamic expression for the derivative of the volume to increasing attractions. While a weak non-linear response is observed, an average effective slope accurately captures the volume decrease. This response, however, is anticipated to become more non-linear with increasing solute size.
Perception of force and stiffness in the presence of low-frequency haptic noise
Gurari, Netta; Okamura, Allison M.; Kuchenbecker, Katherine J.
2017-01-01
Objective This work lays the foundation for future research on quantitative modeling of human stiffness perception. Our goal was to develop a method by which a human’s ability to perceive suprathreshold haptic force stimuli and haptic stiffness stimuli can be affected by adding haptic noise. Methods Five human participants performed a same-different task with a one-degree-of-freedom force-feedback device. Participants used the right index finger to actively interact with variations of force (∼5 and ∼8 N) and stiffness (∼290 N/m) stimuli that included one of four scaled amounts of haptically rendered noise (None, Low, Medium, High). The haptic noise was zero-mean Gaussian white noise that was low-pass filtered with a 2 Hz cut-off frequency; the resulting low-frequency signal was added to the force rendered while the participant interacted with the force and stiffness stimuli. Results We found that the precision with which participants could identify the magnitude of both the force and stiffness stimuli was affected by the magnitude of the low-frequency haptically rendered noise added to the haptic stimulus, as well as the magnitude of the haptic stimulus itself. The Weber fraction strongly correlated with the standard deviation of the low-frequency haptic noise with a Pearson product-moment correlation coefficient of ρ > 0.83. The mean standard deviation of the low-frequency haptic noise in the haptic stimuli ranged from 0.184 N to 1.111 N across the four haptically rendered noise levels, and the corresponding mean Weber fractions spanned between 0.042 and 0.101. Conclusions The human ability to perceive both suprathreshold haptic force and stiffness stimuli degrades in the presence of added low-frequency haptic noise. 
Future work can use the reported methods to investigate how force perception and stiffness perception may relate, with possible applications in haptic watermarking and in the assessment of the functionality of peripheral pathways in individuals with haptic impairments. PMID:28575068
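The noise-generation scheme described above (zero-mean Gaussian white noise, low-pass filtered at 2 Hz, added to the rendered force) can be sketched as follows. The sample rate, the first-order IIR filter, and all names are illustrative assumptions; the authors' exact filter design is not specified here.

```python
import numpy as np

def lowpass_gaussian_noise(n_samples, std, cutoff_hz=2.0, fs=1000.0, seed=0):
    """Zero-mean Gaussian white noise passed through a first-order
    low-pass filter: y[k] = y[k-1] + alpha * (x[k] - y[k-1])."""
    rng = np.random.default_rng(seed)
    white = rng.normal(0.0, std, n_samples)        # zero-mean white noise
    wc = 2.0 * np.pi * cutoff_hz / fs              # normalized cutoff
    alpha = wc / (1.0 + wc)                        # backward-Euler coefficient
    y = np.empty(n_samples)
    acc = 0.0
    for k, x in enumerate(white):
        acc += alpha * (x - acc)                   # smooth toward each sample
        y[k] = acc
    return y

noise = lowpass_gaussian_noise(5000, std=0.5)      # 5 s at 1 kHz (assumed rate)
rendered_force = 5.0 + noise                       # ~5 N stimulus plus noise
print(noise.shape)
```

The standard deviation parameter here plays the role of the noise-level scaling (None/Low/Medium/High) varied in the study.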
2002-09-01
Table of contents excerpt (layout garbled): Time Management; Data Distribution Management; Ownership Management; Additional Objects and Interactions; Figure 6, Data Distribution Management (from ref. 2); Figure 7, RTI and Federate Code Responsibilities (from ref. 2).
Arizzi, Anna; Viles, Heather; Martín-Sanchez, Inés; Cultrone, Giuseppe
2016-01-15
Hemp-based composites are eco-friendly building materials as they improve energy efficiency in buildings and entail low waste production and pollutant emissions during their manufacturing process. Nevertheless, the organic nature of hemp enhances the bio-receptivity of the material, with likely negative consequences for its long-term performance in the building. The main purpose of this study was to investigate the response at macro- and micro-scale of hemp-lime renders subjected to weathering simulations in an environmental cabinet (one year was condensed into twelve days), so as to predict their long-term durability in coastal and inland areas with Mediterranean, Tropical and Semi-arid climates, also in relation to the lime type used. The simulated climatic conditions caused almost unnoticeable mass, volume and colour changes in hemp-lime renders. No efflorescence or physical breakdown was detected in samples subjected to NaCl, because the salt mainly precipitates on the surface of samples and is washed away by the rain. Although there was no visible microbial colonisation, alkaliphilic fungi (mainly Penicillium and Aspergillus) and bacteria (mainly Bacillus and Micrococcus) were isolated in all samples. Microbial growth and diversification were higher under the Tropical climate, due to heavier rainfall. The influence of the bacterial activity on the hardening of samples has also been discussed here and related to the formation and stabilisation of vaterite in hemp-lime mixes. This study has demonstrated that hemp-lime renders show good durability towards a wide range of environmental conditions and factors. However, it might be useful to take some specific preventive and maintenance measures to reduce the bio-receptivity of this material, thus ensuring a longer durability on site. Copyright © 2015 Elsevier B.V. All rights reserved.
Fast DRR generation for 2D to 3D registration on GPUs.
Tornai, Gábor János; Cserey, György; Pappas, Ion
2012-08-01
The generation of digitally reconstructed radiographs (DRRs) is the most time consuming step on the CPU in intensity based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares performance achievable on four commercially available devices. A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean of the execution time increased linearly with respect to the number of pixels if sampling was used. The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or depending on the application and hardware in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.
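The core idea above (a DRR is a simulated radiograph obtained by integrating attenuation along rays through the CT volume, optionally rendering only a sparse subset of detector pixels) can be illustrated with a CPU toy. Orthographic rays along one axis stand in for the paper's perspective GPU ray caster; the volume size, sampling ratio, and function name are assumptions.

```python
import numpy as np

def drr_orthographic(ct_volume, sampling_ratio=1.0, seed=0):
    """Sum attenuation along axis 0 to form a parallel-beam DRR;
    optionally evaluate only a random subset of detector pixels,
    as in sparse-sampling intensity-based registration."""
    full = ct_volume.sum(axis=0).astype(float)     # line integral per pixel
    if sampling_ratio >= 1.0:
        return full
    rng = np.random.default_rng(seed)
    mask = rng.random(full.shape) < sampling_ratio # keep ~ratio of pixels
    return np.where(mask, full, 0.0)               # unsampled pixels left at 0

vol = np.ones((72, 32, 32))                        # stand-in for a CT volume
print(drr_orthographic(vol)[0, 0])                 # each ray crosses 72 voxels
```

Sparse sampling is what makes the 0.3-7.3 ms render times reported above possible: only 1.1%-9.1% of the ROI's rays need to be cast per similarity-metric evaluation.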
Kim, K; Lee, S
2015-05-01
Diagnosis of skin conditions depends on the assessment of skin surface properties, which are better represented by tactile properties such as stiffness, roughness, and friction than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real-time. Conversion from single 2D images to 3D surfaces was implemented taking into account human perception data collected in a psychophysical experiment that measured human visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties found by prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces by using a haptic device (Falcon) only. No visual cue was provided for the experiment. The results indicate that our system provides sufficient performance to render discernible tactile differences between skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real-time for the purpose of skin diagnosis, simulations, or training. Our system can also be used for other applications like virtual reality and cosmetic applications. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
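The paper's perception-tuned image-to-surface mapping is not given in the abstract, so the following is only a minimal sketch of the general pipeline: rescale image intensity into a height map, then derive a lateral contact force from the local surface gradient. All names (`image_to_height_map`, `surface_normal_force`) and constants are hypothetical:

```python
import numpy as np

def image_to_height_map(gray, max_height_mm=2.0):
    """Toy conversion of a single grayscale skin image to a height map.

    Simply rescales intensity to [0, max_height_mm]; the authors instead
    tune this mapping against measured human haptic sensitivity.
    """
    g = np.asarray(gray, float)
    g = (g - g.min()) / max(g.max() - g.min(), 1e-9)   # normalize to [0, 1]
    return g * max_height_mm

def surface_normal_force(height, x, y, stiffness=1.0):
    """Penalty-style lateral force from the finite-difference slope at (x, y)."""
    dzdx = (height[y, x + 1] - height[y, x - 1]) / 2.0
    dzdy = (height[y + 1, x] - height[y - 1, x]) / 2.0
    return np.array([-stiffness * dzdx, -stiffness * dzdy])

img = np.zeros((5, 5)); img[2, 2] = 255            # a single bright "bump"
h = image_to_height_map(img)
print(h.max(), surface_normal_force(h, 1, 2))
```

A real-time haptic loop would evaluate such a force at ~1 kHz at the probe position, with stiffness set from the measured skin biomechanics.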
Effect of topical ophthalmic epinastine and olopatadine on tear volume in mice.
Villareal, Arturo L; Farley, William; Pflugfelder, Stephen C
2006-12-01
To investigate the effects of topical epinastine and olopatadine on tear volume by using a mouse model. Eighty-five C57BL6 mice (170 eyes) were treated twice daily with topical ophthalmic epinastine 0.05%, olopatadine 0.1%, or atropine 1% or served as untreated controls. A thread-wetting assay was used to measure tear volume at baseline and 15, 45, 90, 120, and 240 minutes after the last instillation of the drug on days 2 and 4 of treatment. After 2 days of treatment, epinastine-treated mice showed greater mean tear volumes than olopatadine-treated mice did at 15, 45, 90, and 240 minutes, with statistical significance at 15 and 45 minutes (P<0.001). Olopatadine significantly reduced tear volume versus untreated controls at 15 and 45 minutes (P<0.001). After 4 days, tear volumes with epinastine treatment exceeded those with olopatadine treatment at all time points, with statistical significance at 45 minutes (P<0.05). Atropine rendered tears undetectable at 15, 45, and 90 minutes; tear volume returned to baseline levels at 240 minutes. Topical epinastine did not inhibit tear secretion, whereas olopatadine caused a significant decrease in tear volume. Because of its neutral impact on the lacrimal functional unit, epinastine may be an especially good choice for the treatment of allergic conjunctivitis in patients with dry eye disease or in those who are at risk for developing dry eye.
Kong, Li; Herold, Christina J; Zöllner, Frank; Salat, David H; Lässer, Marc M; Schmid, Lena A; Fellhauer, Iven; Thomann, Philipp A; Essig, Marco; Schad, Lothar R; Erickson, Kirk I; Schröder, Johannes
2015-02-28
Grey matter volume and cortical thickness are the two most widely used measures for detecting grey matter morphometric changes in various diseases such as schizophrenia. However, these two measures only partially overlap in the regions they identify as changed. Few studies have investigated which potential factors contribute to the differences between grey matter volume and cortical thickness. To investigate this question, 3T magnetic resonance images from 22 patients with schizophrenia and 20 well-matched healthy controls were chosen for analysis. Grey matter volume and cortical thickness were measured by VBM and FreeSurfer. Grey matter volume results were then rendered onto the FreeSurfer surface template to compare their anatomical locations with those of the cortical thickness differences. Discrepancy regions, where grey matter volume significantly decreased without corresponding evidence of cortical thinning, involved the rostral middle frontal, precentral, lateral occipital and superior frontal gyri. Subsequent region-of-interest analysis demonstrated that changes in surface area, grey/white matter intensity contrast and curvature accounted for the discrepancies. Our results suggest that the differences between grey matter volume and thickness could be jointly driven by surface area, grey/white matter intensity contrast and curvature. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Skype me! Socially contingent interactions help toddlers learn language.
Roseberry, Sarah; Hirsh-Pasek, Kathy; Golinkoff, Roberta M
2014-01-01
Language learning takes place in the context of social interactions, yet the mechanisms that render social interactions useful for learning language remain unclear. This study focuses on whether social contingency might support word learning. Toddlers aged 24-30 months (N = 36) were exposed to novel verbs in one of three conditions: live interaction training, socially contingent video training over video chat, and noncontingent video training (yoked video). Results suggest that children only learned novel verbs in socially contingent interactions (live interactions and video chat). This study highlights the importance of social contingency in interactions for language learning and informs the literature on learning through screen media as the first study to examine word learning through video chat technology. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
PPI layouts: BioJS components for the display of Protein-Protein Interactions
Salazar, Gustavo A.; Meintjes, Ayton; Mulder, Nicola
2014-01-01
Summary: We present two web-based components for the display of Protein-Protein Interaction networks using different self-organizing layout methods: force-directed and circular. These components conform to the BioJS standard and can be rendered in an HTML5-compliant browser without the need for third-party plugins. We provide examples of interaction networks and how the components can be used to visualize them, and refer to a more complex tool that uses these components. Availability: http://github.com/biojs/biojs; http://dx.doi.org/10.5281/zenodo.7753 PMID:25075288
NASA Technical Reports Server (NTRS)
Mohlenbrink, Christoph P.; Omar, Faisal Gamal; Homola, Jeffrey R.
2017-01-01
This is a video replay of system data generated during the UAS Traffic Management (UTM) Technical Capability Level (TCL) 2 flight demonstration in Nevada and rendered in Google Earth. Depicted in the replay is a particular set of flights conducted as part of what was referred to as the Ocean scenario. The test range and surrounding area are presented, followed by an overview of operational volumes. System messaging is also displayed, as well as a replay of all five test flights as they occurred.
Creation of anatomical models from CT data
NASA Astrophysics Data System (ADS)
Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.
2018-04-01
Computed tomography is a rich source of biomedical data because it allows detailed exploration of complex anatomical structures. Some structures are not visible on CT scans, and some are hard to distinguish due to the partial volume effect. CT datasets therefore require preprocessing before they can be used as anatomical models in a simulation system. This work describes segmentation and data transformation methods for creating an anatomical model from CT data. The resulting models may be used for visual and haptic rendering and drilling simulation in a virtual surgery system.
2009-05-01
demonstrated to degrade a specific kidney segment (proximal tubule and glomerulus, respectively). In this study a total of seventeen protein biomarkers were...exposure. Two experimental nephrotoxins were interrogated, D-serine and puromycin, each previously demonstrated to degrade a specific kidney segment...to degradation during isolation from sample render it unlikely to develop into a fieldable, self-contained assay system within the near future
1998-12-01
Soft Sphere Molecular Model for Inverse-Power-Law or Lennard Jones Potentials , Physics of Fluids A, Vol. 3, No. 10, pp. 2459-2465. 42. Legge, H...information; — Providing assistance to member nations for the purpose of increasing their scientific and technical potential ; — Rendering scientific and...nal, 34:756-763, 1996. [22] W. Jones and B. Launder. The Prediction of Laminarization with a Two-Equation Model of Turbulence. Int. Journal of Heat
Generating soft shadows with a depth buffer algorithm
NASA Technical Reports Server (NTRS)
Brotman, L. S.; Badler, N. I.
1984-01-01
Computer-synthesized shadows have traditionally appeared with a sharp edge when cast onto a surface. The present investigation considers the production of more realistic, soft shadows, although significant costs arise in connection with such a representation. A pragmatic approach is taken, which combines an existing shadowing method with a popular visible-surface rendering technique, called a 'depth buffer', to generate soft shadows resulting from light sources of finite extent. The method represents an extension of Crow's (1977) shadow volume algorithm.
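The core idea — a finite-extent light sampled as many point lights, with the fraction of visible samples giving the penumbra — can be illustrated without a full renderer. Below, the depth-buffer/shadow-volume occlusion test is replaced by a 2-D geometric test against a single occluder segment; the scene and all names are hypothetical:

```python
import numpy as np

def soft_shadow(receiver_x, light_xs, occ_half_width=0.5,
                light_y=10.0, occ_y=5.0):
    """Fraction of area-light samples visible from a ground point (y = 0).

    Each sample is a point light; a receiver is lit by a sample if the
    line to it misses the occluder segment at height occ_y. Averaging the
    binary visibilities over samples produces the soft penumbra, the same
    accumulation idea as per-sample shadow-volume/depth-buffer tests.
    """
    t = occ_y / light_y                          # ray parameter at occluder height
    cross_x = receiver_x + (light_xs - receiver_x) * t
    visible = np.abs(cross_x) > occ_half_width
    return visible.mean()

light_xs = np.linspace(-1.0, 1.0, 101)           # samples across the area light
for x in (0.0, 0.9, 3.0):                        # umbra, penumbra, fully lit
    print(x, soft_shadow(x, light_xs))
```

Points directly under the occluder see no samples (umbra, factor 0), far points see all of them (factor 1), and intermediate points get a fractional factor that shades smoothly.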
The Coast Artillery Journal. Volume 73, Number 1, July 1930
1930-07-01
preserve them in such a manner that they may vindicate themselves whatever the disadvantages may be. As was the case after the Revolution, the public mind has...individually for the signal service that you can render. ADDRESS TO GRADUATES Address of Major-General John W. Gulick, Chief of Coast Artillery It is a pleasure...exceed three years, the normal tour being not less than two years. This matter is being studied and I hope to work out a satisfactory plan to give more
Congenital anatomic variants of the kidney and ureter: a pictorial essay.
Srinivas, M R; Adarsh, K M; Jeeson, Riya; Ashwini, C; Nagaraj, B R
2016-03-01
Congenital renal parenchymal and pelvicalyceal abnormalities span a wide spectrum. Most of them, such as ectopia, crossed fused kidney and horseshoe kidney, are asymptomatic, while a few become complicated, leading to renal failure and death. It is very important for the radiologist to identify these anatomic variants and guide clinicians in surgical and therapeutic procedures. Cross-sectional imaging with a volume-rendered technique/maximum intensity projection has surpassed ultrasonography and IVU for the identification and interpretation of some of these variants.
Segmentation of Unstructured Datasets
NASA Technical Reports Server (NTRS)
Bhat, Smitha
1996-01-01
Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
2007-01-01
fatigued. The majority of the OIL and TTP listed here are oriented to the Level I management of combat CAX or management at the point of injury (POI) or...carried into the field by medical personnel. Time to evacuation from the POI or other casualty evacuation point (CEP) to an MTF may vary considerably...must be avoided during this time. Care must be rendered once the mission has reached an anticipated evacuation point , without pursuit, awaiting CASEVAC
1988-01-01
activities Joe D. Elms, for their editorial evaluation of the Environmental Assessment Program. Additional depends to a large extent on weather cond...winds of 25 knots (13 meters per second) or more, and air tempera...lower. Icing causes slippery decks, renders moving parts inoperable, and, in extreme...try to avoid foul weather and thereby bias the oceanic climatology towards fair weather. A recent study by Elms (1986), in which he compared the
Imaging method for monitoring delivery of high dose rate brachytherapy
Weisenberger, Andrew G; Majewski, Stanislaw
2012-10-23
A method for in-situ monitoring of both the balloon/cavity and the radioactive source in brachytherapy treatment, using at least one pair of miniature gamma cameras to acquire separate images of: 1) the radioactive source as it is moved in the tumor volume during brachytherapy; and 2) a relatively low-intensity radiation source produced either by an injected radiopharmaceutical rendering cancerous tissue visible or by a radioactive solution filling a balloon surgically implanted into the cavity formed by the surgical resection of a tumor.
Acoustic-tactile rendering of visual information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.
2012-03-01
In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.
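The abstract compares intensity and tempo as proximity cues but does not specify the mappings used, so the following is a hedged sketch of two plausible distance-to-cue functions; `intensity_cue`, `tempo_cue`, and all constants are hypothetical stand-ins:

```python
def intensity_cue(distance, ref=1.0):
    """Amplitude scaling for proximity: closer targets sound louder.

    A simple clamped 1/d law stands in for whatever mapping the authors
    used, which the abstract does not specify.
    """
    return min(1.0, ref / max(distance, 1e-6))

def tempo_cue(distance, d_max=10.0, rate_min=1.0, rate_max=8.0):
    """Repetition rate in Hz for proximity: closer targets tick faster."""
    d = min(max(distance, 0.0), d_max)
    return rate_max - (rate_max - rate_min) * d / d_max

for d in (0.5, 2.0, 8.0):
    print(d, intensity_cue(d), tempo_cue(d))
```

Either cue would be modulated onto the spatialized (HRTF-rendered) identification sound as the finger approaches an object; the experiments above found the intensity variant more effective.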
Calibration, reconstruction, and rendering of cylindrical millimeter-wave image data
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.
2011-05-01
Cylindrical millimeter-wave imaging systems and technology have been under development at the Pacific Northwest National Laboratory (PNNL) for several years. This technology has been commercialized, and systems are currently being deployed widely across the United States and internationally. These systems are effective at screening for concealed items of all types; however, new sensor designs, image reconstruction techniques, and image rendering algorithms could potentially improve performance. At PNNL, a number of specific techniques have been developed recently to improve cylindrical imaging methods including wideband techniques, combining data from full 360-degree scans, polarimetric imaging techniques, calibration methods, and 3-D data visualization techniques. Many of these techniques exploit the three-dimensionality of the cylindrical imaging technique by optimizing the depth resolution of the system and using this information to enhance detection. Other techniques, such as polarimetric methods, exploit scattering physics of the millimeter-wave interaction with concealed targets on the body. In this paper, calibration, reconstruction, and three-dimensional rendering techniques will be described that optimize the depth information in these images and the display of the images to the operator.
Tangible display systems: direct interfaces for computer-based studies of surface appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.; Ferwerda, James A.
2010-02-01
When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections. We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation system will be better able to support a range of appearance perception applications.
MovieMaker: a web server for rapid rendering of protein motions and interactions
Maiti, Rajarshi; Van Domselaar, Gary H.; Wishart, David S.
2005-01-01
MovieMaker is a web server that allows short (∼10 s), downloadable movies of protein motions to be generated. It accepts PDB files or PDB accession numbers as input and automatically calculates, renders and merges the necessary image files to create colourful animations covering a wide range of protein motions and other dynamic processes. Users have the option of animating (i) simple rotation, (ii) morphing between two end-state conformers, (iii) short-scale, picosecond vibrations, (iv) ligand docking, (v) protein oligomerization, (vi) mid-scale nanosecond (ensemble) motions and (vii) protein folding/unfolding. MovieMaker does not perform molecular dynamics calculations. Instead it is an animation tool that uses a sophisticated superpositioning algorithm in conjunction with Cartesian coordinate interpolation to rapidly and automatically calculate the intermediate structures needed for many of its animations. Users have extensive control over the rendering style, structure colour, animation quality, background and other image features. MovieMaker is intended to be a general-purpose server that allows both experts and non-experts to easily generate useful, informative protein animations for educational and illustrative purposes. MovieMaker is accessible at . PMID:15980488
Xanthopoulos, Emily; Hutchinson, Charles E; Adams, Judith E; Bruce, Ian N; Nash, Anthony F P; Holmes, Andrew P; Taylor, Christopher J; Waterton, John C
2007-01-01
Contrast-enhanced MRI is of value in assessing rheumatoid pannus in the hand, but the images are not always easy to quantitate. The aim was to develop and evaluate an improved measurement of the volume of enhancing pannus (VEP) in the hand in human rheumatoid arthritis (RA). MR images of the hand and wrist were obtained for 14 patients with RA at 0, 1 and 13 weeks. Volume of enhancing pannus was measured on images created by subtracting precontrast T1-weighted images from contrast-enhanced T1-weighted images using a shuffle transformation technique. Maximum intensity projection (MIP) and 3D volume rendering of the images were used as a guide to identify the pannus and any contrast-enhanced veins. Visualisation of pannus was much improved following the shuffle transform. Between 0 weeks and 1 week, the mean value of the within-subject coefficient of variation (CoV) was 0.13 and the estimated total CoV was 0.15. There was no evidence of significantly increased variability within the 13-week interval for the complete sample of patients. Volume of enhancing pannus can be measured reproducibly in the rheumatoid hand using 3D contrast-enhanced MRI and the shuffle transform.
Correlations among Brain Gray Matter Volumes, Age, Gender, and Hemisphere in Healthy Individuals
Taki, Yasuyuki; Thyreau, Benjamin; Kinomura, Shigeo; Sato, Kazunori; Goto, Ryoi; Kawashima, Ryuta; Fukuda, Hiroshi
2011-01-01
To determine the relationship between age and gray matter structure and how interactions between gender and hemisphere impact this relationship, we examined correlations between global or regional gray matter volume and age, including interactions of gender and hemisphere, using a general linear model with voxel-based and region-of-interest analyses. Brain magnetic resonance images were collected from 1460 healthy individuals aged 20–69 years; the images were linearly normalized and segmented and restored to native space for analysis of global gray matter volume. Linearly normalized images were then non-linearly normalized and smoothed for analysis of regional gray matter volume. Analysis of global gray matter volume revealed a significant negative correlation between gray matter ratio (gray matter volume divided by intracranial volume) and age in both genders, and a significant interaction effect of age × gender on the gray matter ratio. In analyzing regional gray matter volume, the gray matter volume of all regions showed significant main effects of age, and most regions, with the exception of several including the inferior parietal lobule, showed a significant age × gender interaction. Additionally, the inferior temporal gyrus showed a significant age × gender × hemisphere interaction. No regional volumes showed significant age × hemisphere interactions. Our study may contribute to clarifying the mechanism(s) of normal brain aging in each brain region. PMID:21818377
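The statistical machinery above — a general linear model with an age × gender interaction on a gray matter measure — can be sketched on simulated data. This is a toy illustration, not the authors' voxel-wise pipeline; the interaction coefficient tests whether the age slope differs between genders. All names and the simulated effect sizes are hypothetical:

```python
import numpy as np

def fit_glm_interaction(age, gender, y):
    """Least-squares fit of y ~ 1 + age + gender + age:gender.

    Gender is coded 0/1; the last coefficient is the age-by-gender
    interaction, the term of interest in the analysis described above.
    """
    X = np.column_stack([np.ones_like(age), age, gender, age * gender])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta   # [intercept, age slope, gender offset, interaction]

rng = np.random.default_rng(0)
age = rng.uniform(20, 69, 200)
gender = rng.integers(0, 2, 200).astype(float)
# simulated gray matter ratio: linear decline with age, steeper for gender == 1
y = 0.8 - 0.002 * age - 0.001 * age * gender + rng.normal(0, 0.005, 200)
beta = fit_glm_interaction(age, gender, y)
print(beta.round(4))
```

In the study this model is fitted per voxel (or per region), and the interaction coefficient is thresholded for significance rather than simply read off.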
High-resolution three-dimensional magnetic resonance imaging of mouse lung in situ.
Scadeng, Miriam; Rossiter, Harry B; Dubowitz, David J; Breen, Ellen C
2007-01-01
This study establishes a method for high-resolution isotropic magnetic resonance (MR) imaging of mouse lungs using tracheal liquid-instillation to remove MR susceptibility artifacts. C57BL/6J mice were instilled sequentially with perfluorocarbon and phosphate-buffered saline to an airway pressure of 10, 20, or 30 cm H2O. Imaging was performed in a 7T MR scanner using a 2.5-cm Quadrature volume coil and a 3-dimensional (3D) FLASH imaging sequence. Liquid-instillation removed magnetic susceptibility artifacts and allowed lung structure to be viewed at an isotropic resolution of 78-90 microm. Instilled liquid and modeled lung volumes were well correlated (R = 0.92; P < 0.05) and differed by a constant tissue volume (220 +/- 92 microL). 3D image renderings allowed differences in structural dimensions (volumes and areas) to be accurately measured at each inflation pressure. These data demonstrate the efficacy of pulmonary liquid instillation for in situ high-resolution MR imaging of mouse lungs for accurate measurement of pulmonary airway, parenchymal, and vascular structures.
Luo, Xiongbiao; Mori, Kensaku
2014-06-01
Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. When applied to endoscope tracking, the proposed similarity measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, or normalized sum of squared differences. In a clinical data evaluation, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm. Processing was accelerated to more than 30 frames per second using a graphics processing unit.
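The luminance/contrast/structure decomposition the measure builds on is the classic structural similarity (SSIM) form. As a hedged reference point — the discriminative weighting and windowing the paper adds are omitted — a global SSIM between two images in [0, 1] can be written as:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global structural similarity of two images scaled to [0, 1].

    Combines the luminance term (means), and the fused contrast/structure
    term (variances and covariance); c1, c2 are the usual small
    stabilising constants. A sketch, not the paper's measure.
    """
    x = np.asarray(x, float); y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.random.default_rng(0).random((32, 32))
print(round(ssim_global(a, a), 6))        # identical images score 1.0
```

In video-volume registration, such a score between each video frame and a candidate volume rendering becomes the objective the camera-pose optimizer maximizes.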
Interactive high-resolution isosurface ray casting on multicore processors.
Wang, Qin; JaJa, Joseph
2008-01-01
We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. The method combines an object-order traversal, which coarsely identifies candidate 3D data blocks for each small set of contiguous pixels, with an isosurface ray casting strategy tailored to the resulting limited-size lists of candidate blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor a Quad-Core 1.86-GHz Intel Xeon, on a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, achieving high cache performance and excellent load balancing, with overall performance superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024 × 1024 screen for all the datasets tested, up to the maximum size of the main memory of our platform.
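The dynamic task allocation idea — a shared pool of tile-sized ray-casting jobs pulled by whichever thread finishes first, rather than a fixed static screen split — can be sketched independently of the actual caster. A minimal version with a thread-safe queue; `render_tiles_dynamic` and the toy renderer are illustrative names, not the paper's code:

```python
import queue
import threading

def render_tiles_dynamic(tiles, render_tile, n_threads=4):
    """Dynamic load balancing for per-tile ray-casting tasks.

    Tiles are pulled from a shared queue, so fast threads automatically
    pick up extra work; the caller-supplied render_tile stands in for
    the isosurface caster over one tile's candidate block list.
    """
    q = queue.Queue()
    for t in tiles:
        q.put(t)
    results, lock = {}, threading.Lock()

    def worker():
        while True:
            try:
                tile = q.get_nowait()
            except queue.Empty:
                return                      # pool drained: thread exits
            out = render_tile(tile)
            with lock:
                results[tile] = out

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for th in threads: th.start()
    for th in threads: th.join()
    return results

# toy "renderer" whose cost varies per tile, as real tiles' costs do
res = render_tiles_dynamic(range(16), lambda t: t * t)
print(len(res))   # 16
```

Grouping several contiguous tiles per queue entry, as the paper does, trades a little balancing granularity for better spatial locality in the volume data.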
Compression and accelerated rendering of volume data using DWT
NASA Astrophysics Data System (ADS)
Kamath, Preyas; Akleman, Ergun; Chan, Andrew K.
1998-09-01
2D images cannot convey information on object depth and location relative to surfaces. The medical community is increasingly using 3D visualization techniques to view data from CT scans, MRI, etc. 3D images provide more information on depth and location in the spatial domain to help surgeons make better diagnoses. 3D images can be constructed from 2D images using 3D scalar algorithms. With recent advances in communication techniques, it is possible for doctors to diagnose and plan treatment of a patient who lives at a remote location, by transmitting the relevant patient data via telephone lines. If this information is to be reconstructed in 3D, then 2D images must be transmitted; however, 2D dataset storage occupies a lot of memory, and visualization algorithms are slow. We describe in this paper a scheme which reduces the data transfer time by transmitting only the information that the doctor wants. Compression is achieved by reducing the amount of data transferred. This is made possible by applying the 3D wavelet transform to 3D datasets. Since the wavelet transform is localized in the frequency and spatial domains, we transmit detail only in the region where the doctor needs it. Since only the ROI (region of interest) is reconstructed in detail, we need to render only the ROI in detail, and thus we can reduce the rendering time.
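The abstract does not name the wavelet family used, so as a hedged illustration of the 3D DWT idea, here is one level of a separable 3-D Haar transform: the low-pass corner holds block averages (a coarse preview that can always be sent), while the detail bands can be thresholded or transmitted only inside the region of interest. `haar3d_level1` is a hypothetical name:

```python
import numpy as np

def haar3d_level1(v):
    """One level of a 3-D Haar wavelet transform (averages and details).

    Along each axis, even/odd pairs are replaced by their mean (low-pass,
    first half) and half-difference (detail, second half). After all
    three axes, v[:n/2, :n/2, :n/2] holds 2x2x2 block averages and the
    remaining seven octants hold detail coefficients.
    """
    def pair(a, axis):
        lo = (a.take(range(0, a.shape[axis], 2), axis) +
              a.take(range(1, a.shape[axis], 2), axis)) / 2
        hi = (a.take(range(0, a.shape[axis], 2), axis) -
              a.take(range(1, a.shape[axis], 2), axis)) / 2
        return np.concatenate([lo, hi], axis)
    for ax in range(3):
        v = pair(v, ax)
    return v

vol = np.random.default_rng(1).random((8, 8, 8))
coeffs = haar3d_level1(vol)
print(coeffs.shape)   # (8, 8, 8): same size, reorganized into 8 subbands
```

In the scheme described above, only detail coefficients whose support intersects the doctor's ROI would be transmitted and used in reconstruction; everywhere else the coarse band suffices.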
Cousins, Claire
2015-01-01
The search for once-habitable locations on Mars is increasingly focused on environments dominated by fluvial and lacustrine processes, such as those investigated by the Mars Science Laboratory Curiosity rover. The availability of liquid water coupled with the potential longevity of such systems renders these localities prime targets for the future exploration of Martian biosignatures. Fluvial-lacustrine environments associated with basaltic volcanism are highly relevant to Mars, but their terrestrial counterparts have been largely overlooked as a field analogue. Such environments are common in Iceland, where basaltic volcanism interacts with glacial ice and surface snow to produce large volumes of meltwater within an otherwise cold and dry environment. This meltwater can be stored to create subglacial, englacial, and proglacial lakes, or be released as catastrophic floods and proglacial fluvial systems. Sedimentary deposits produced by the resulting fluvial-lacustrine activity are extensive, with lithologies dominated by basaltic minerals, low-temperature alteration assemblages (e.g., smectite clays, calcite), and amorphous, poorly crystalline phases (basaltic glass, palagonite, nanophase iron oxides). This paper reviews examples of these environments, including their sedimentary deposits and microbiology, within the context of utilising these localities for future Mars analogue studies and instrument testing. PMID:25692905
Three-Dimensional Analysis of the Fundus of the Human Internal Acoustic Canal.
Schart-Morén, Nadine; Larsson, Sune; Rask-Andersen, Helge; Li, Hao
Documentation of the nerve components in the internal acoustic canal is essential before cochlear implantation surgery. Interpretation may be challenged by wide anatomical variations of the VIIIth nerve and its ramifications, and malformations may further defy proper nerve identification. Using microcomputed tomography, we analyzed the fundus bone channels in an archival collection of 113 macerated human temporal bones and 325 plastic inner molds. Data were subsequently processed by volume-rendering software using a bony tissue algorithm. Three-dimensional reconstructions were made, and through orthogonal sections, the topographic anatomy was established. The technique provided additional information regarding the anatomy of the nerve foramina/channels of the human fundus region, including their variations and destinations. Channel anastomoses were found beyond the level of the fundus. A foramen of the transverse crest was identified. Three-dimensional reconstructions and cropping outlined the bone canals and demonstrated the highly variable VIIIth nerve anatomy at the fundus of the human internal acoustic canal. Myriad channel interconnections suggested an intricate system of neural interactive pathways in humans. Particularly striking was the variable anatomy of the saccule nerve channels. The results may assist in the preoperative interpretation of VIIIth nerve anatomy.
Geometric modeling of the temporal bone for cochlea implant simulation
NASA Astrophysics Data System (ADS)
Todd, Catherine A.; Naghdy, Fazel; O'Leary, Stephen
2004-05-01
The first stage in the development of a clinically valid surgical simulator for training otologic surgeons in performing cochlear implantation is presented. For this purpose, a geometric model of the temporal bone has been derived from a cadaver specimen using the biomedical image processing software package Analyze (AnalyzeDirect, Inc) and its three-dimensional reconstruction is examined. Simulator construction begins with registration and processing of a Computed Tomography (CT) medical image sequence. Important anatomical structures of the middle and inner ear are identified and segmented from each scan in a semi-automated, threshold-based approach. Linear interpolation between image slices produces a three-dimensional volume dataset: the geometric model. Artefacts are effectively eliminated using a semi-automatic seeded region-growing algorithm and unnecessary bony structures are removed. Once validated by an Ear, Nose and Throat (ENT) specialist, the model may be imported into the Reachin Application Programming Interface (API) (Reachin Technologies AB) for visual and haptic rendering associated with a virtual mastoidectomy. Interaction with the model is realized with haptic interfacing, providing the user with accurate torque and force feedback. Electrode array insertion into the cochlea will be introduced in the final stage of design.
NASA Astrophysics Data System (ADS)
Gay, Aurélien
2017-06-01
The initial sediment lithification starts with complex interactions involving minerals, surface water, decomposing organic matter and living organisms. This is the eogenesis domain (0 to 2 km below the seafloor), in which the sediments are subject to physical, chemical and mechanical transformations defining the early fabric of rocks. This interval is intensively prospected for its energy/mining resources (hydrocarbons, metal deposits, geothermal energy). In most basins worldwide it is composed of very fine-grained sediments and is supposed to act as a seal for fluid migration. However, it is affected by polygonal faulting due to a volume loss during burial by contraction of clay sediments with a high smectite content. This process is of high interest for fractured reservoirs and/or cover integrity, but it is not well constrained, leaving it uncertain whether this interval promotes the migration of deeper fluids or whether mineralized fluids intensify diagenesis in the fracture planes, rendering the interval all the more impermeable. The next challenge will be to define where, when and how this polygonal fault interval occurs, and this can only be done by understanding the behavior of clay grains and fluids during early burial.
Yoon, Eun Jin; Choi, Jung-Seok; Kim, Heejung; Sohn, Bo Kyung; Jung, Hee Yeon; Lee, Jun-Young; Kim, Dai-Jin; Park, Sun-Won; Kim, Yu Kyeong
2017-07-18
Internet gaming disorder (IGD) has been conceptualized as a behavioral addiction and shares clinical, neuropsychological, and personality characteristics with alcohol use disorder (AUD), but IGD does not entail brain exposure to toxic agents, which renders it different from AUD. To achieve a clear understanding of the neurobiological features of IGD, we aimed to identify morphological and functional changes in IGD and compare them with those in AUD. Individuals with IGD showed larger volume in the hippocampus/amygdala and precuneus than healthy controls (HCs). The volume in the hippocampus positively correlated with the symptom severity of IGD. Moreover, functional connectivity analysis with the hippocampus/amygdala cluster revealed that the left ventromedial prefrontal cortex showed stronger functional connectivity in individuals with IGD compared to those with AUD. In contrast, individuals with AUD exhibited smaller cerebellar volume and a thinner medial frontal cortex than HCs. The volume in the cerebellum correlated with impaired working memory function as well as duration of illness in the AUD group. Findings suggested that altered volume and functional connectivity in the hippocampus/amygdala in IGD might be associated with abnormally enhanced memory processing of gaming-related cues, while abnormal cortical changes and cognitive impairments in AUD might be associated with the neurotoxic effects of alcohol.
Childhood Cumulative Risk Exposure and Adult Amygdala Volume and Function.
Evans, Gary W; Swain, James E; King, Anthony P; Wang, Xin; Javanbakht, Arash; Ho, S Shaun; Angstadt, Michael; Phan, K Luan; Xie, Hong; Liberzon, Israel
2016-06-01
Considerable work indicates that early cumulative risk exposure is aversive to human development, but very little research has examined the neurological underpinnings of these robust findings. This study investigates amygdala volume and reactivity to facial stimuli among adults (mean 23.7 years of age, n = 54) as a function of cumulative risk exposure during childhood (9 and 13 years of age). In addition, we test to determine whether expected cumulative risk elevations in amygdala volume would mediate functional reactivity of the amygdala during socioemotional processing. Risks included substandard housing quality, noise, crowding, family turmoil, child separation from family, and violence. Total and left hemisphere adult amygdala volumes were positively related to cumulative risk exposure during childhood. The links between childhood cumulative risk exposure and elevated amygdala responses to emotionally neutral facial stimuli in adulthood were mediated by the corresponding amygdala volumes. Cumulative risk exposure in later adolescence (17 years of age), however, was unrelated to subsequent adult amygdala volume or function. Physical and socioemotional risk exposures early in life appear to alter amygdala development, rendering adults more reactive to ambiguous stimuli such as neutral faces. These stress-related differences in childhood amygdala development might contribute to the well-documented psychological distress as a function of early risk exposure. © 2015 Wiley Periodicals, Inc.
Method for net-shaping using aerogels
Brinker, C. Jeffrey; Ashey, Carol S.; Reed, Scott T.; Sriram, Chunangad S.; Harris, Thomas M.
2001-01-01
A method of net-shaping using aerogel materials is provided by first forming a sol, aging the sol to form a gel, with the gel having a fluid component and having been formed into a medium selected from the group consisting of a powder, bulk material, or granular aerobeads, derivatizing the surface of the gel to render the surface unreactive toward further condensation, removing a portion of the fluid component of the final shaped gel to form a partially dried medium, placing the medium into a cavity, wherein the volume of said medium is less than the volume of the cavity, and removing a portion of the fluid component of the medium. The removal, such as by heating at a temperature of approximately less than 50 °C, applying a vacuum, or both, causes the volume of the medium to increase and to form a solid aerogel. The material can be easily removed by exposing the material to a solvent, thereby reducing the volume of the material. In another embodiment, the gel is derivatized and then formed into a shaped medium, where subsequent drying reduces the volume of the shaped medium, forming a net-shaping material. Upon further drying, the material increases in volume to fill a cavity. The present invention is both a method of net-shaping and the material produced by the method.
Humor as Safe House in the Foreign Language Classroom
ERIC Educational Resources Information Center
Pomerantz, Anne; Bell, Nancy D.
2011-01-01
Analyses of second language (L2) classroom interaction often categorize joking and other humorous talk by students as disruptive, off-task behavior, thereby rendering it important only from a classroom management perspective. Studies of language play, however, have illustrated that such jocular talk not only allows students to construct a broader…
Simple Solutions to Complex Problems--MOOCs as a Panacea?
ERIC Educational Resources Information Center
Bass, Scott A.
2014-01-01
This article is critical of the implementation of massive open online courses (MOOCs) by institutions seeking the deep student learning often found in general education learning outcomes. Customized student interaction with an expert in the field is rendered impossible by the scale of MOOC enrollment. Concerns are also raised about the economic…
Social vulnerability and Ebola virus disease in rural Liberia
John A. Stanturf; Scott L. Goodrick; Melvin L. Warren; Susan Charnley; Christie M. Stegall
2015-01-01
The Ebola virus disease (EVD) epidemic that has stricken thousands of people in the three West African countries of Liberia, Sierra Leone, and Guinea highlights the lack of adaptive capacity in post-conflict countries. The scarcity of health services in particular renders these populations vulnerable to multiple interacting stressors including food insecurity, climate...
[The institutionalization of health care in Russia: actual trends].
Erugina, M V; Krom, I L
2016-01-01
Since the XX century, health care has been a first-rate social institution. The article analyzes tendencies in the functioning of the institution of health care in modern Russia within the methodological framework of the system of social structural functions (AGIL) proposed by T. Parsons. The patient is the main participant in the medical organizational process, and the activity of the other participants in the organization of medical care should be focused primarily on satisfying the needs of the patient during the rendering of medical care. Society trains subjects to execute their professional roles, which determines the professionalization of the functions performed. The most important purpose of modern training programs in medical education is an advanced level of cognition, forecasting and achievement of socially significant outcomes in the structuring of the educational process. In the context of the integrative function, the activities of the interacting participants are coordinated. Under the current tendencies of a market economy, the interaction of the participants in the process of rendering medical care and in the process of quality control of medical care develops on the basis of the implementation of standards of medical care. In Russia, the institutionalization of health care presupposes the cooperation and interaction of the subjects of the system, who differ in the degree and amount of collaborative work. The latent function (maintenance of the value pattern) determines the regularity, predictability and stability of the functioning of social relationships. Social control supports the expedient behavior of the participants in the process of rendering medical care. The dysfunctional practices of modern Russian health care are considered in the context of the concept of effective interaction of the participants in the medical organizational process, targeted at upholding the rights of patients to accessible and high-quality medical care.
The applied analysis revealed problems that decrease the effectiveness of the system of organization of medical care for the population. The minimization of the risks of growing social inequity and the improvement of access to qualified medical care are considered in the context of the verification and overcoming of dysfunctions in the organization of medical care for the population.
Sarma, Debanga; Barua, Sasanka K; Rajeev, T P; Baruah, Saumar J
2012-10-01
Nuclear renal scan is currently the gold standard imaging study to determine differential renal function. We propose helical CT as a single modality for both the anatomical and functional evaluation of a kidney with impaired function. In the present study, renal parenchymal volume is measured and percent total renal volume is used as a surrogate marker for differential renal function. The objective of this study is to correlate differential renal function estimated from CT-based renal parenchymal volume measurement with that estimated by (99m)Tc-DTPA renal scan. Twenty-one patients with unilateral obstructive uropathy were enrolled in this prospective comparative study. They were subjected to (99m)Tc-DTPA renal scan and 64-slice helical CT, which estimates renal volume through reconstruction of arterial phase images followed by volume rendering, from which percent renal volume was calculated. Percent renal volume was correlated with percent renal function, as determined by nuclear renal scan, using the Pearson coefficient. RESULTS AND OBSERVATION: A strong correlation is observed between percent renal volume and percent renal function in obstructed units (r = 0.828, P < 0.001) as well as in nonobstructed units (r = 0.827, P < 0.001). There is a strong correlation between percent renal volume determined by CT scan and percent renal function determined by (99m)Tc-DTPA renal scan, both in obstructed and in normal units. CT-based percent renal volume can be used as a single radiological test for both functional and anatomical assessment of impaired renal units.
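The surrogate described above reduces to simple arithmetic: each kidney's share of the total parenchymal volume stands in for its share of function, and the two estimates are compared with a Pearson coefficient. A minimal sketch (the paired values below are hypothetical illustrative numbers, not the study's data):

```python
import math

def percent_renal_volume(left_ml: float, right_ml: float) -> tuple[float, float]:
    """Each kidney's parenchymal volume as a percentage of the total --
    the surrogate used for differential (split) renal function."""
    total = left_ml + right_ml
    return 100.0 * left_ml / total, 100.0 * right_ml / total

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, as used to compare
    CT percent volume against scan-derived percent function."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired measurements: CT-derived percent volume vs.
# DTPA-derived percent function for five renal units.
ct_percent_volume = [32.0, 45.5, 28.1, 50.2, 38.7]
dtpa_percent_function = [30.5, 44.0, 25.9, 51.0, 40.1]
r = pearson_r(ct_percent_volume, dtpa_percent_function)
```

When the two estimates track each other, r approaches 1, which is the pattern the study reports (r = 0.828 in obstructed units).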
Asymmetric cooperative catalysis of strong Brønsted acid-promoted reactions using chiral ureas.
Xu, Hao; Zuend, Stephan J; Woll, Matthew G; Tao, Ye; Jacobsen, Eric N
2010-02-19
Cationic organic intermediates participate in a wide variety of useful synthetic transformations, but their high reactivity can render selectivity in competing pathways difficult to control. Here, we describe a strategy for inducing enantioselectivity in reactions of protio-iminium ions, wherein a chiral catalyst interacts with the highly reactive intermediate through a network of noncovalent interactions. This interaction leads to an attenuation of the reactivity of the iminium ion and allows high enantioselectivity in cycloadditions with electron-rich alkenes (the Povarov reaction). A detailed experimental and computational analysis of this catalyst system has revealed the precise nature of the catalyst-substrate interactions and the likely basis for enantioinduction.
Chechetkin, A V; Popova, N N; Kuz'min, N S; Fedotov, Iu P
2004-01-01
The authors analyze the experience of organizing the provision of hemotransfusion materials to the wounded and sick during the antiterrorist operation in the Republic of Dagestan and the Republic of Chechnya in 1999-2001. It is shown that the military treatment-and-prophylactic institutions deployed in the zone of the military conflict, and the specialized hospitals, were supplied with hemotransfusion materials as follows: blood preparations through centralized deliveries, and blood components (91.8%) through centralized deliveries from the district blood transfusion station. The volume of stored blood taken from emergency-reserve donors in the military treatment-and-prophylactic institutions located near the zone of military actions constituted 8.2% of the total volume of blood components received.
Heat Melt Compaction as an Effective Treatment for Eliminating Microorganisms from Solid Waste
NASA Technical Reports Server (NTRS)
Hummerick, Mary P.; Strayer, Richard F.; McCoy, Lashelle E.; Richards, Jeffrey T.; Ruby, Anna Maria; Wheeler, Ray; Fisher, John
2013-01-01
One of the technologies being tested at NASA Ames Research Center (ARC) for the Advanced Exploration Systems program, and as part of the logistics and repurposing project, is heat melt compaction (HMC) of solid waste. HMC reduces volume, removes water, and renders a biologically stable and safe product. The HMC compacts and reduces the trash volume by as much as 90% more than the current manual compaction used by the crew. The project has three primary goals or tasks: 1. Microbiological analysis of HMC hardware surfaces before and after operation. 2. Microbiological and physical characterization of heat melt tiles made from trash at different processing times and temperatures. 3. Long-term storage and stability of HMC trash tiles, or "Do the bugs grow back?"
Lee, Jung-Rok; Appelmann, Iris; Miething, Cornelius; Shultz, Tyler O.; Ruderman, Daniel; Kim, Dokyoon; Mallick, Parag; Lowe, Scott W.; Wang, Shan X.
2018-01-01
Cancer proteomics is the manifestation of relevant biological processes in cancer development. Thus, it reflects the activities of tumor cells, host-tumor interactions, and systemic responses to cancer therapy. To understand the causal effects of tumorigenesis or therapeutic intervention, longitudinal studies are greatly needed. However, most of the conventional mouse experiments are unlikely to accommodate frequent collection of serum samples with a large enough volume for multiple protein assays towards single-object analysis. Here, we present a technique based on magneto-nanosensors to longitudinally monitor the protein profiles in individual mice of lymphoma models using a small volume of a sample for multiplex assays. Methods: Drug-sensitive and -resistant cancer cell lines were used to develop the mouse models that render different outcomes upon the drug treatment. Two groups of mice were inoculated with each cell line, and treated with either cyclophosphamide or vehicle solution. Serum samples taken longitudinally from each mouse in the groups were measured with 6-plex magneto-nanosensor cytokine assays. To find the origin of IL-6, experiments were performed using IL-6 knock-out mice. Results: The differences in serum IL-6 and GCSF levels between the drug-treated and untreated groups were revealed by the magneto-nanosensor measurement on individual mice. Using the multiplex assays and mouse models, we found that IL-6 is secreted by the host in the presence of tumor cells upon the drug treatment. Conclusion: The multiplex magneto-nanosensor assays enable longitudinal proteomic studies on mouse tumor models to understand tumor development and therapy mechanisms more precisely within a single biological object. PMID:29507628
Individual differences in posterior cortical volume correlate with proneness to pride and gratitude.
Zahn, Roland; Garrido, Griselda; Moll, Jorge; Grafman, Jordan
2014-11-01
Proneness to specific moral sentiments (e.g. pride, gratitude, guilt, indignation) has been linked with individual variations in functional MRI (fMRI) response within anterior brain regions whose lesion leads to inappropriate behaviour. However, the role of structural anatomical differences in rendering individuals prone to particular moral sentiments relative to others is unknown. Here, we investigated grey matter volumes (VBM8) and proneness to specific moral sentiments on a well-controlled experimental task in healthy individuals. Individuals with smaller cuneus and precuneus volumes were more pride-prone, whereas those with larger right inferior temporal volumes experienced gratitude more readily. Although the primary analysis detected no associations with guilt- or indignation-proneness, subgenual cingulate fMRI responses to guilt were negatively correlated with grey matter volumes in the left superior temporal sulcus and anterior dorsolateral prefrontal cortices (right > left). This shows that individual variations in functional activations within critical areas for moral sentiments were not due to grey matter volume differences in the same areas. Grey matter volume differences between healthy individuals may nevertheless play an important role by affecting posterior cortical brain systems that are non-critical but supportive for the experience of specific moral sentiments. This may be of particular relevance when their experience depends on visuo-spatial elaboration. Published by Oxford University Press 2013. This work is written by US Government employees and is in the public domain in the US.
Beaulieu, C F; Jeffrey, R B; Karadi, C; Paik, D S; Napel, S
1999-07-01
To determine the sensitivity of radiologist observers for detecting colonic polyps by using three different data review (display) modes for computed tomographic (CT) colonography, or "virtual colonoscopy." CT colonographic data in a patient with a normal colon were used as base data for insertion of digitally synthesized polyps. Forty such polyps (3.5, 5, 7, and 10 mm in diameter) were randomly inserted in four copies of the base data. Axial CT studies, volume-rendered virtual endoscopic movies, and studies from a three-dimensional mode termed "panoramic endoscopy" were reviewed blindly and independently by two radiologists. Detection improved with increasing polyp size. Trends in sensitivity were dependent on whether all inserted lesions or only visible lesions were considered, because modes differed in how completely the colonic surface was depicted. For both reviewers and all polyps 7 mm or larger, panoramic endoscopy resulted in significantly greater sensitivity (90%) than did virtual endoscopy (68%, P = .014). For visible lesions only, the sensitivities were 85%, 81%, and 60% for one reader and 65%, 62%, and 28% for the other for virtual endoscopy, panoramic endoscopy, and axial CT, respectively. Three-dimensional displays were more sensitive than two-dimensional displays (P < .05). The sensitivity of panoramic endoscopy is higher than that of virtual endoscopy, because the former displays more of the colonic surface. Higher sensitivities for three-dimensional displays may justify the additional computation and review time.
VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model.
Yu, Bowen; Silva, Claudio T
2017-01-01
Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limitation of interactivity from past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces the diagram complexity and improves usability. We demonstrate the capability of VisFlow on two case studies with domain experts on real-world datasets showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.
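The subset flow model described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not VisFlow's actual code: every value flowing between nodes is a set of row indices into one original input table, so filter nodes only narrow subsets (they never create rows), and rendering properties can be attached to rows unambiguously for tracking and comparison across views.

```python
class Table:
    """One original input table; rows are dicts keyed by column name."""
    def __init__(self, rows):
        self.rows = rows

def filter_node(table, subset, predicate):
    """A filter node narrows the incoming subset of row indices."""
    return {i for i in subset if predicate(table.rows[i])}

def assign_property(props, subset, **style):
    """A property node attaches rendering attributes (e.g. color)
    to each row in the subset, keyed by the row's index."""
    for i in subset:
        props.setdefault(i, {}).update(style)
    return props

# Toy flow: full table -> filter -> highlight the surviving subset.
table = Table([{"mpg": 30}, {"mpg": 15}, {"mpg": 22}])
everything = set(range(len(table.rows)))
efficient = filter_node(table, everything, lambda r: r["mpg"] > 20)
props = assign_property({}, efficient, color="red")
```

Because each downstream node receives indices into the same original table, a brushed selection in one view maps directly onto rows in every coordinated view.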
Integrating segmentation methods from the Insight Toolkit into a visualization application.
Martin, Ken; Ibáñez, Luis; Avila, Lisa; Barré, Sébastien; Kaspersen, Jon H
2005-12-01
The Insight Toolkit (ITK) initiative from the National Library of Medicine has provided a suite of state-of-the-art segmentation and registration algorithms ideally suited to volume visualization and analysis. A volume visualization application that effectively utilizes these algorithms provides many benefits: it allows access to ITK functionality for non-programmers, it creates a vehicle for sharing and comparing segmentation techniques, and it serves as a visual debugger for algorithm developers. This paper describes the integration of image processing functionalities provided by the ITK into VolView, a visualization application for high performance volume rendering. A free version of this visualization application is publicly available in the online version of this paper. The process for developing ITK plugins for VolView according to the publicly available API is described in detail, and an application of ITK VolView plugins to the segmentation of Abdominal Aortic Aneurysms (AAAs) is presented. The source code of the ITK plugins is also publicly available and is included in the online version.
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
Recent advances in biocompatible semiconductor nanocrystals for immunobiological applications.
Nanda, Sitansu Sekhar; Kim, Min Jik; Kim, Kwangmeyung; Papaefthymiou, Georgia C; Selvan, Subramanian Tamil; Yi, Dong Kee
2017-11-01
Quantum confinement in inorganic semiconductor nanocrystals produces brightly luminescent nanoparticles endowed with unique photo-physical properties, such as tunable optical properties. These have found widespread applications in nanotechnology. The ability to render such nanostructures biocompatible, while maintaining their tunable radiation in the visible range of the electromagnetic spectrum, renders them appropriate for bio-applications. Promising in vitro and in vivo diagnostic applications have been demonstrated, such as fluorescence-based detection of biological interactions, single molecule tracking, multiplexing and immunoassaying. In particular, these fluorescent inorganic semiconductor nanocrystals, generally known as quantum dots, have the potential of remarkable immunobiological applications. This review focuses on the current status of biocompatible quantum dots and their applications in immunobiology - immunosensing, immunofluorescent imaging and immunotherapy. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Benthem, Mark H.
2016-05-04
This software is employed for 3D visualization of X-ray diffraction (XRD) data, with functionality for slicing, reorienting, isolating and plotting of 2D color contour maps and 3D renderings of large datasets. The program makes use of the multidimensionality of textured XRD data, where diffracted intensity is not constant over a given set of angular positions (as dictated by the three defined dimensional angles of phi, chi, and two-theta). Datasets are rendered in 3D with intensity as a scalar, represented on a rainbow color scale. A GUI interface and scrolling tools, along with interactive functions via the mouse, allow for fast manipulation of these large datasets so as to perform detailed analysis of diffraction results with full dimensionality of the diffraction space.
A new approach to subjectively assess quality of plenoptic content
NASA Astrophysics Data System (ADS)
Viola, Irene; Řeřábek, Martin; Ebrahimi, Touradj
2016-09-01
Plenoptic content is becoming increasingly popular thanks to the availability of acquisition and display devices. Through image-based rendering techniques, plenoptic content can be rendered in real time in an interactive manner, allowing virtual navigation through the captured scenes. This way of consuming content enables new experiences, and therefore introduces several challenges in terms of plenoptic data processing, transmission and, consequently, visual quality evaluation. In this paper, we propose a new methodology to subjectively assess the visual quality of plenoptic content. We also introduce a prototype software to perform subjective quality assessment according to the proposed methodology. The proposed methodology is further applied to assess the visual quality of a light field compression algorithm. Results show that this methodology can be successfully used to assess the visual quality of plenoptic content.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
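The combination of a stratified strategy with an importance strategy described above is an instance of multiple importance sampling (MIS). A generic sketch of the classic balance heuristic follows; the integrand and the two strategies below are illustrative stand-ins, not the paper's SSAO estimator:

```python
import math
import random

def balance_weight(p_this, p_other):
    """Balance heuristic: weight a sample by how likely this strategy
    was to produce it, relative to all strategies combined."""
    return p_this / (p_this + p_other)

def mis_estimate(f, sample_a, pdf_a, sample_b, pdf_b, n):
    """Combine two sampling strategies with the balance heuristic.
    Each iteration draws one sample from each strategy; the weighted
    sum is an unbiased estimate of the integral of f."""
    total = 0.0
    for _ in range(n):
        x = sample_a()
        total += balance_weight(pdf_a(x), pdf_b(x)) * f(x) / pdf_a(x)
        x = sample_b()
        total += balance_weight(pdf_b(x), pdf_a(x)) * f(x) / pdf_b(x)
    return total / n

# Toy check: integrate f(x) = 2x over [0, 1] (exact value 1) with a
# uniform strategy (pdf 1) and an importance strategy whose pdf 2x
# matches the integrand, sampled by inverse CDF x = sqrt(u).
random.seed(1)
est = mis_estimate(
    f=lambda x: 2.0 * x,
    sample_a=random.random, pdf_a=lambda x: 1.0,
    sample_b=lambda: math.sqrt(random.random()), pdf_b=lambda x: 2.0 * x,
    n=20000,
)
```

The appeal for a sampling-hungry technique like SSAO is exactly the paper's stated objective: samples from whichever strategy fits the local integrand best dominate the estimate, so fewer total samples are needed for the same quality.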
Decreased hypothalamus volumes in generalized anxiety disorder but not in panic disorder.
Terlevic, Robert; Isola, Miriam; Ragogna, Maria; Meduri, Martina; Canalaz, Francesca; Perini, Laura; Rambaldelli, Gianluca; Travan, Luciana; Crivellato, Enrico; Tognin, Stefania; Como, Giuseppe; Zuiani, Chiara; Bazzocchi, Massimo; Balestrieri, Matteo; Brambilla, Paolo
2013-04-25
The hypothalamus is a brain structure involved in the neuroendocrine aspect of stress and anxiety. Evidence suggests that generalized anxiety disorder (GAD) and panic disorder (PD) might be accompanied by dysfunction of the hypothalamus-pituitary-adrenal (HPA) axis, but structural alterations have so far not been studied. We investigated hypothalamic volumes in patients with either GAD or PD and in healthy controls. Twelve GAD patients, 11 PD patients and 21 healthy controls underwent a 1.5T MRI scan. Hypothalamus volumes were manually traced by a rater blind to subjects' identity. A general linear model for repeated measures (GLM-RM) was used to compare groups on hypothalamic volumes, controlling for total intracranial volume, age and sex. The hypothalamus volume was significantly reduced (p=0.04) in GAD patients, with significant reductions on both the left (p=0.02) and right side (p=0.04). Patients with PD did not differ significantly (p=0.73). Anxiety scores were inversely correlated with hypothalamic volumes. The small sample size could reduce the generalizability of the results, while the lack of stress hormone measurements renders a functional assessment of the HPA axis infeasible. The present study showed decreased hypothalamic volumes in GAD patients but not in those with PD. Future longitudinal studies should combine volumetric data with measurements of stress hormones to better elucidate the role of the HPA axis in GAD. Copyright © 2012 Elsevier B.V. All rights reserved.
Theoretical study for volume changes associated with the helix-coil transition of peptides.
Imai, T; Harano, Y; Kovalenko, A; Hirata, F
2001-12-01
We calculate the partial molar volumes and their changes associated with the coil(extended)-to-helix transition of two types of peptide, a glycine oligomer and a glutamic acid oligomer, in aqueous solution by using the Kirkwood-Buff solution theory coupled with the three-dimensional reference interaction site model (3D-RISM) theory. The volume changes associated with the transition are small and positive. The volume is analyzed by decomposing it into five contributions following the procedure proposed by Chalikian and Breslauer: the ideal volume, the van der Waals volume, the void volume, the thermal volume, and the interaction volume. The ideal volumes and the van der Waals volumes do not change appreciably upon the transition. For both the glycine peptide and the glutamic acid peptide, the changes in the void volumes are positive, while those in the thermal volumes are negative and tend to balance those in the void volumes. The change in the interaction volume of the glycine peptide does not contribute significantly, while that of the glutamic acid peptide makes a negative contribution. Copyright 2001 John Wiley & Sons, Inc. Biopolymers 59: 512-519, 2001
ERIC Educational Resources Information Center
Newman, Ehren L.; Caplan, Jeremy B.; Kirschen, Matthew P.; Korolev, Igor O.; Sekuler, Robert; Kahana, Michael J.
2007-01-01
By having subjects drive a virtual taxicab through a computer-rendered town, we examined how landmark and layout information interact during spatial navigation. Subject-drivers searched for passengers, and then attempted to take the most efficient route to the requested destinations (one of several target stores). Experiment 1 demonstrated that…
USDA-ARS?s Scientific Manuscript database
Potato virus Y (PVY) is an economically important and reemerging potato pathogen in North America. PVY infection reduces yield, and some necrotic and recombinant strains render tubers unmarketable. Although PVYO is the most prevalent strain in the United States, the necrotic and recombinant strains ...
(DCT-FY08) Target Detection Using Multiple Modality Airborne and Ground Based Sensors
2013-03-01
..."Plenoptic modeling: an image-based rendering system," in SIGGRAPH '95: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. New York, NY, USA: ACM, 1995, pp. 39-46. [21] D. G. Aliaga and I. Carlbom, "Plenoptic stitching: a scalable method for reconstructing 3D...
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
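One way to realize sorting-free opacity control of the kind described above is to render only a random fraction of the points, chosen so that the expected pixel coverage matches a target opacity. The sketch below solves for that fraction under a simple binomial coverage model; the function and parameter names are assumptions for illustration, not the authors' exact formulation.

```python
def keep_ratio(target_alpha, points_per_pixel, point_coverage=1.0):
    # Fraction of candidate points to keep so that a pixel crossed by
    # `points_per_pixel` points reaches `target_alpha` opacity, when each
    # kept point opaquely covers `point_coverage` of the pixel.
    # Solves: 1 - (1 - point_coverage * r) ** points_per_pixel = target_alpha
    r = (1.0 - (1.0 - target_alpha) ** (1.0 / points_per_pixel)) / point_coverage
    return min(1.0, r)

# Keep roughly 6.7% of points when 10 points cross each pixel
# and 50% opacity is wanted.
ratio = keep_ratio(0.5, 10)
```

Because every kept point is rendered fully opaque, no per-point alpha blending order matters, which is why the method needs no depth sorting.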
Interactional Competence in Japanese as an Additional Language. Pragmatics & Interaction. Volume 4
ERIC Educational Resources Information Center
Greer, Tim, Ed.; Ishida, Midori, Ed.; Tateyama, Yumiko, Ed.
2017-01-01
In the research literature on interactional competence in talk among second language speakers and their coparticipants, this volume of "Pragmatics & Interaction" is the first to focus on interaction in Japanese. The chapters examine the use and development of interactional practices in a wide range of social settings, from everyday…
Local structure of percolating gels at very low volume fractions
NASA Astrophysics Data System (ADS)
Griffiths, Samuel; Turci, Francesco; Royall, C. Patrick
2017-01-01
The formation of colloidal gels is strongly dependent on the volume fraction of the system and the strength of the interactions between the colloids. Here we explore very dilute solutions by means of numerical simulations and show that, in the absence of hydrodynamic interactions and for sufficiently strong interactions, percolating colloidal gels can be realised at very low values of the volume fraction. Characterising the structure of the network of the arrested material, we find that, when reducing the volume fraction, the gels are dominated by low-energy local structures, analogous to the isolated clusters of the interaction potential. Changing the strength of the interaction allows us to tune the compactness of the gel as characterised by the fractal dimension, with low interaction strength favouring more chain-like structures.
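The fractal dimension used above to characterise gel compactness is commonly estimated by box counting. The sketch below does this for a 2D point set (the gel analysis itself would be 3D); the box sizes and least-squares fit are illustrative assumptions, not the authors' analysis pipeline.

```python
import math

def box_counting_dimension(points, sizes):
    # Estimate the fractal dimension of a 2D point set by box counting:
    # the slope of log N(eps) versus log(1/eps), fit by least squares,
    # where N(eps) is the number of boxes of side eps containing a point.
    logs = []
    for eps in sizes:
        boxes = {(math.floor(x / eps), math.floor(y / eps)) for x, y in points}
        logs.append((math.log(1.0 / eps), math.log(len(boxes))))
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

# A uniformly filled square should come out close to dimension 2,
# while a chain-like structure would come out close to 1.
grid = [(i / 50.0, j / 50.0) for i in range(50) for j in range(50)]
dim = box_counting_dimension(grid, [0.5, 0.25, 0.125])
```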
Dissortativity and duplications in oral cancer
NASA Astrophysics Data System (ADS)
Shinde, Pramod; Yadav, Alok; Rai, Aparna; Jalan, Sarika
2015-08-01
More than 300 000 new cases of oral cancer are diagnosed worldwide annually. The complexity of oral cancer renders designing drug targets very difficult. We analyse the protein-protein interaction networks for normal and oral cancer tissue and detect crucial changes in the structural properties of the networks in terms of the interactions of the hub proteins and the degree-degree correlations. Further analysis of the spectra of both networks, while exhibiting universal statistical behaviour, manifests a distinction in terms of the zero degeneracy, providing insight into the complexity of the underlying system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Hongcai J
In the past decades, there has been an escalation of interest in the study of MOFs due to their fascinating structures and intriguing application potentials. Their exceptionally high surface areas, uniform yet tunable pore sizes, and well-defined adsorbate-MOF interaction sites make them suitable for hydrogen storage. Various strategies to increase the hydrogen capacity of MOFs, such as constructing pore sizes comparable to hydrogen molecules, increasing surface area and pore volume, utilizing catenation, and introducing coordinatively unsaturated metal centers (UMCs), have been widely explored to increase the hydrogen uptake of MOFs. MOFs with hydrogen uptake approaching the DOE gravimetric storage goal under reasonable pressure but cryogenic temperature (typically 77 K) were achieved. However, the weak interaction between hydrogen molecules and MOFs has been the major hurdle limiting the hydrogen uptake of MOFs at ambient temperature. Along the road, we have realized that both high surface area and strong interaction between framework and hydrogen are equally essential for porous materials to be practically applicable in hydrogen storage. Increasing the isosteric heat of adsorption for hydrogen through the introduction of active centers into the framework could have great potential for rendering the framework with strong interaction toward hydrogen. Approaches to increasing the surface areas and improving hydrogen affinity by optimizing the size and structure of the pores and the alignment of active centers around the pores in frameworks have been pursued, for example: (a) the introduction of coordinatively UMCs (a metal center missing multiple ligands) with the potential capability of multiple dihydrogen bindings (Kubas type, non-dissociative) per UMC, (b) the design and synthesis of proton-rich MOFs in which a +H3 binds dihydrogen just like a metal ion does, and (c) the preparation of MOFs and PPNs with well-aligned internal electric fields.
We believe the accomplishments of this DOE-supported research will greatly benefit the future pursuit of hydrogen storage materials. The ultimate goal of increasing the gravimetric and volumetric hydrogen storage capacity to meet DOE targets for Light-Duty Vehicles is achievable.
The Volume Field Model about Strong Interaction and Weak Interaction
NASA Astrophysics Data System (ADS)
Liu, Rongwu
2016-03-01
For a long time, researchers have believed that the strong interaction and the weak interaction are realized by exchanging intermediate particles. This article proposes a new mechanism as follows: a volume field is a form of material existence in plane space; it undergoes volume-changing motion in a non-continuous manner, and two volume fields interact strongly or weakly by overlapping each other. Based on these concepts, this article further proposes a ``bag model'' of the volume field for the atomic nucleus, which includes three sub-models: the complex structure of a fundamental body (such as a quark), the atom-like structure of a hadron, and the molecule-like structure of an atomic nucleus. This article also proposes a plane space model and formulates a physics model of the volume field in plane space, as well as a model of space-time conversion. The model of space-time conversion suggests that point space-time and plane space-time convert into each other by means of merging and rupture, respectively; the essence of space-time conversion is the mutual transformation of matter and energy; and the collision of high-energy hadrons, the formation of a black hole, and the Big Bang of the universe are three kinds of space-time conversions.
Rodman, J S; Reckler, J M; Israel, A R
1981-08-01
Following surgery for branched renal calculi, hemiacidrin irrigation may be useful to dissolve any residual stones. Struvite, the mineral in these stones, is itself an alkaline buffer and can raise the pH of the irrigating solution rendering it ineffective. Large volumes of hemiacidrin must reach the stone remnants or they are unlikely to dissolve. Two cases are described in which creative positioning of the patient or the irrigation catheters was necessary to permit adequate amounts of hemiacidrin to reach and dissolve stone remnants.
1998-05-26
...therefore, produce higher propagation losses. A. Theory: The presence of losses in the cladding modes renders their propagation constants complex... crack growth theory [10, 11] gives t_f(L, F, G_a), where L is the service length, L_0 is the fiber gauge length, and m is... single input pulse. (p. 114) 8:30am BMB2: Ultrashort pulse propagation through fiber gratings: theory and experiment, L.R. Chen, S.D. Benjamin
Shwirl: Meaningful coloring of spectral cube data with volume rendering
NASA Astrophysics Data System (ADS)
Vohl, Dany
2017-04-01
Shwirl visualizes spectral data cubes with meaningful coloring methods. The program has been developed to investigate transfer functions, which combine volumetric elements (voxels) to set the color, and graphics shaders, which compute several properties of the final image such as color, depth, and/or transparency, as enablers for scientific visualization of astronomical data. The program uses Astropy (ascl:1304.002) to handle FITS files and World Coordinate System data, Qt (and PyQt) for the user interface, and VisPy, an object-oriented Python visualization library binding onto OpenGL.
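A transfer function of the kind Shwirl investigates maps each voxel value to a colour and opacity, which are then composited along the viewing ray. The sketch below is a generic, hedged illustration of this idea in plain Python, not Shwirl's actual shader code; the colour ramp and early-termination threshold are assumptions.

```python
def transfer_function(v, vmin=0.0, vmax=1.0):
    # Map a voxel intensity to (r, g, b, a): a blue-to-red colour ramp,
    # with opacity proportional to the normalized intensity.
    t = max(0.0, min(1.0, (v - vmin) / (vmax - vmin)))
    return (t, 4.0 * t * (1.0 - t), 1.0 - t, t)

def composite(samples):
    # Front-to-back alpha compositing of the transfer-function output
    # along one viewing ray, with early ray termination.
    rgb, alpha = [0.0, 0.0, 0.0], 0.0
    for v in samples:
        r, g, b, a = transfer_function(v)
        w = (1.0 - alpha) * a         # remaining transmittance times opacity
        rgb = [c + w * s for c, s in zip(rgb, (r, g, b))]
        alpha += w
        if alpha > 0.99:
            break                     # ray is nearly opaque: stop sampling
    return rgb, alpha
```

In a GPU shader the same logic runs per fragment; changing only `transfer_function` is what lets such tools recolour a spectral cube without touching the compositing loop.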
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by more than three times compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
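The core of the plane-based sampling idea is to place samples where the ray crosses the equidistant parallel plane cluster rather than at fixed steps along the ray. A minimal sketch, assuming axis-aligned planes z = z0 + k*dz (the paper's plane cluster need not be axis-aligned):

```python
def plane_intersections(origin, direction, z0, dz, n_planes):
    # Ray parameters t at which origin + t * direction crosses the
    # equidistant parallel planes z = z0 + k * dz, k = 0 .. n_planes - 1.
    oz = origin[2]
    dz_ray = direction[2]
    if abs(dz_ray) < 1e-12:
        return []                    # ray parallel to the plane cluster
    ts = [(z0 + k * dz - oz) / dz_ray for k in range(n_planes)]
    return sorted(t for t in ts if t >= 0.0)
```

Evaluating the volume only at these crossings aligns samples with the slice planes of the sequential images, which is where the optical properties are defined.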
Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, G. Patrick; Browne, Jolyon
The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
Web-based virtual tours have become a desirable and in-demand application, yet remain challenging due to the nature of the web application's running environment, such as limited bandwidth and no guarantee of high computational power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth and high computational power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and their virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk-around', which is a very important feature for providing an immersive experience to virtual tourists. Our web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space using several snapshots of conventional photos.
MovieMaker: a web server for rapid rendering of protein motions and interactions.
Maiti, Rajarshi; Van Domselaar, Gary H; Wishart, David S
2005-07-01
MovieMaker is a web server that allows short (approximately 10 s), downloadable movies of protein motions to be generated. It accepts PDB files or PDB accession numbers as input and automatically calculates, renders and merges the necessary image files to create colourful animations covering a wide range of protein motions and other dynamic processes. Users have the option of animating (i) simple rotation, (ii) morphing between two end-state conformers, (iii) short-scale, picosecond vibrations, (iv) ligand docking, (v) protein oligomerization, (vi) mid-scale nanosecond (ensemble) motions and (vii) protein folding/unfolding. MovieMaker does not perform molecular dynamics calculations. Instead it is an animation tool that uses a sophisticated superpositioning algorithm in conjunction with Cartesian coordinate interpolation to rapidly and automatically calculate the intermediate structures needed for many of its animations. Users have extensive control over the rendering style, structure colour, animation quality, background and other image features. MovieMaker is intended to be a general-purpose server that allows both experts and non-experts to easily generate useful, informative protein animations for educational and illustrative purposes. MovieMaker is accessible at http://wishart.biology.ualberta.ca/moviemaker.
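The Cartesian coordinate interpolation mentioned above can be sketched as linear blending between two aligned conformers. This illustrative snippet omits the superpositioning step that MovieMaker performs first, and the function name is an assumption, not MovieMaker's API.

```python
def interpolate_conformers(coords_a, coords_b, n_frames):
    # Linear Cartesian interpolation between two pre-aligned conformers.
    # coords_a, coords_b: equal-length lists of (x, y, z) atom positions.
    # Requires n_frames >= 2; frame 0 is conformer A, the last is B.
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)
        frames.append([tuple(a + t * (b - a) for a, b in zip(pa, pb))
                       for pa, pb in zip(coords_a, coords_b)])
    return frames
```

Rendering each frame in sequence yields the morphing animation; because the intermediates are interpolated rather than simulated, no molecular dynamics calculation is needed.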
Hippocampal subfield segmentation in temporal lobe epilepsy: Relation to outcomes.
Kreilkamp, B A K; Weber, B; Elkommos, S B; Richardson, M P; Keller, S S
2018-06-01
To investigate the clinical and surgical outcome correlates of preoperative hippocampal subfield volumes in patients with refractory temporal lobe epilepsy (TLE) using a new magnetic resonance imaging (MRI) multisequence segmentation technique. We recruited 106 patients with TLE and hippocampal sclerosis (HS) who underwent conventional T1-weighted and T2 short TI inversion recovery MRI. An automated hippocampal segmentation algorithm was used to identify twelve subfields in each hippocampus. A total of 76 patients underwent amygdalohippocampectomy and postoperative seizure outcome assessment using the standardized ILAE classification. Semiquantitative hippocampal internal architecture (HIA) ratings were correlated with hippocampal subfield volumes. Patients with left TLE had smaller volumes of the contralateral presubiculum and hippocampus-amygdala transition area compared to those with right TLE. Patients with right TLE had reduced contralateral hippocampal tail volumes and improved outcomes. In all patients, there were no significant relationships between hippocampal subfield volumes and clinical variables such as duration and age at onset of epilepsy. There were no significant differences in any hippocampal subfield volumes between patients who were rendered seizure free and those with persistent postoperative seizure symptoms. Ipsilateral but not contralateral HIA ratings were significantly correlated with gross hippocampal and subfield volumes. Our results suggest that ipsilateral hippocampal subfield volumes are not related to the chronicity/severity of TLE. We did not find any hippocampal subfield volume or HIA rating differences in patients with optimal and unfavorable outcomes. In patients with TLE and HS, sophisticated analysis of hippocampal architecture on MRI may have limited value for prediction of postoperative outcome. © 2018 The Authors. Acta Neurologica Scandinavica Published by John Wiley & Sons Ltd.
Relation between cannabis use and subcortical volumes in people at clinical high risk of psychosis
Buchy, Lisa; Mathalon, Daniel H.; Cannon, Tyrone D.; Cadenhead, Kristin S.; Cornblatt, Barbara A.; McGlashan, Thomas H.; Perkins, Diana O.; Seidman, Larry J.; Tsuang, Ming T.; Walker, Elaine F.; Woods, Scott W.; Bearden, Carrie E.; Addington, Jean
2016-01-01
Among people at genetic risk of schizophrenia, those who use cannabis show smaller thalamic and hippocampal volumes. We evaluated this relationship in people at clinical high risk (CHR) of psychosis. The Alcohol and Drug Use Scale was used to identify 132 CHR cannabis users, the majority of whom were non-dependent cannabis users, 387 CHR non-users, and 204 healthy control non-users, and all participants completed magnetic resonance imaging scans. Volumes of the thalamus, hippocampus and amygdala were extracted with FreeSurfer, and compared across groups. Comparing all CHR participants with healthy control participants revealed no significant differences in volumes of any ROI. However, when comparing CHR users to CHR non-users, a significant ROI × Cannabis group effect emerged: CHR users showed significantly smaller amygdala compared to CHR non-users. However, when limiting analysis to CHR subjects who reported using alcohol at a ‘use without impairment’ severity level, the amygdala effect was non-significant; rather, smaller hippocampal volumes were seen in CHR cannabis users compared to non-users. Controlling statistically for effects of alcohol and tobacco use rendered all results non-significant. These results highlight the importance of controlling for residual confounding effects of other substance use when examining the relationship between cannabis use and neural structure. PMID:27289213
Microfabricated instrument for tissue biopsy and analysis
Krulevitch, Peter A.; Lee, Abraham P.; Northrup, M. Allen; Benett, William J.
2001-01-01
A microfabricated biopsy/histology instrument which has several advantages over conventional procedures, including minimal specimen handling, smooth cutting edges with atomic sharpness capable of slicing very thin specimens (approximately 2 μm or greater), micro-liter volumes of chemicals for treating the specimens, low cost, disposability, a fabrication process which renders sterile parts, and ease of use. The cutter is a "cheese-grater" style design comprising a block or substrate of silicon and which uses anisotropic etching of the silicon to form extremely sharp and precise cutting edges. As a specimen is cut, it passes through the silicon cutter and lies flat on a piece of glass which is bonded to the cutter. Microchannels are etched into the glass or silicon substrates for delivering small volumes of chemicals for treating the specimen. After treatment, the specimens can be examined through the glass substrate.
Utilization of volume correlation filters for underwater mine identification in LIDAR imagery
NASA Astrophysics Data System (ADS)
Walls, Bradley
2008-04-01
Underwater mine identification persists as a critical technology pursued aggressively by the Navy for fleet protection. As such, new and improved techniques must continue to be developed in order to provide measurable increases in mine identification performance and noticeable reductions in false alarm rates. In this paper we show how recent advances in the Volume Correlation Filter (VCF) developed for ground based LIDAR systems can be adapted to identify targets in underwater LIDAR imagery. Current automated target recognition (ATR) algorithms for underwater mine identification employ spatial based three-dimensional (3D) shape fitting of models to LIDAR data to identify common mine shapes consisting of the box, cylinder, hemisphere, truncated cone, wedge, and annulus. VCFs provide a promising alternative to these spatial techniques by correlating 3D models against the 3D rendered LIDAR data.
Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.
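The collaborative data model described above is just a small JSON snapshot of each renderer's state; synchronizing that snapshot, rather than pixels, is what keeps the bandwidth cost low. The sketch below shows what such a state object might look like; the field names are assumptions for illustration, not the actual XTK or MedView schema.

```python
import json

def renderer_state(position, volume, orientation):
    # Serialize a minimal renderer-state snapshot. Only this small JSON
    # object would be synchronized between clients; each client
    # re-renders locally from its own full copy of the image data.
    return json.dumps({
        "camera": {"position": position, "orientation": orientation},
        "volume": {"window": volume["window"], "level": volume["level"]},
    }, sort_keys=True)

# A remote client applies the decoded state to its local renderer.
state = renderer_state([0.0, 0.0, 5.0], {"window": 400, "level": 40},
                       [0.0, 0.0, 0.0, 1.0])
```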
Imai, Haruki; Tanaka, Yoji; Nomura, Naoyuki; Doi, Hisashi; Tsutsumi, Yusuke; Ono, Takashi; Hanawa, Takao
2017-02-01
Zr-Ag composites were fabricated to decrease the magnetic susceptibility by compensating for the magnetic susceptibility of their components. Zr-Ag composites with different Zr-Ag ratios were swaged, and their magnetic susceptibility, artifact volume, and mechanical properties were evaluated by magnetic balance, three-dimensional (3-D) artifact rendering, and a tensile test, respectively. These properties were correlated with the volume fraction of Ag using the linear rule of mixture. We successfully obtained the swaged Zr-Ag composites up to a reduction ratio of 96% for Zr-4, 16, 36, 64Ag and 86% for Zr-81Ag. However, the volume fraction of Ag after swaging tended to be lower than that before swaging, especially for Ag-rich Zr-Ag composites. The magnetic susceptibility of the composites linearly decreased with the increasing volume fraction of Ag. No artifact could be estimated with the Ag volume fraction in the range from 93.7% to 95.4% in three conditions. Young's modulus, ultimate tensile strength (UTS), and 0.2% yield strength of the Zr-Ag composites showed slightly lower values compared to the values estimated using a linear rule of mixture. The decrease in magnetic susceptibility of Zr and Ag by alloying or combining would contribute to the decrease of the Ag fraction, leading to the improvement of mechanical properties. Copyright © 2016 Elsevier Ltd. All rights reserved.
View planetary differentiation process through high-resolution 3D imaging
NASA Astrophysics Data System (ADS)
Fei, Y.
2011-12-01
Core-mantle separation is one of the most important processes in planetary evolution, defining the structure and chemical distribution in the planets. Iron-dominated core materials could migrate through the silicate mantle to the core by efficient liquid-liquid separation and/or by percolation of liquid metal through a solid silicate matrix. We can experimentally simulate these processes to examine the efficiency and timing of core formation and its geochemical signatures. The quantitative measure of the efficiency of percolation is usually the dihedral angle, related to the interfacial energies of the liquid and solid phases. To determine the true dihedral angle at high pressure and temperature, it is necessary to measure the relative frequency distributions of apparent dihedral angles between the quenched liquid metal and silicate grains for each experiment. Here I present a new imaging technique to visualize the distribution of liquid metal in a silicate matrix in 3D by combining focused ion beam (FIB) milling and high-resolution SEM imaging. The 3D volume rendering provides precise determination of the dihedral angle and quantitative measures of volume fraction and connectivity. I have conducted a series of experiments using mixtures of San Carlos olivine and Fe-S (10wt%S) metal with different metal-silicate ratios, up to 25 GPa and at temperatures above 1800 °C. High-quality 3D volume renderings were reconstructed from FIB serial sectioning and imaging with 10-nm slice thickness and 14-nm image resolution for each quenched sample. The unprecedented spatial resolution at the nanoscale allows detailed examination of textural features and precise determination of the dihedral angle as a function of pressure, temperature and composition. The 3D reconstruction also allows direct assessment of connectivity in a multi-phase matrix, providing a new way to investigate the efficiency of metal percolation in a real silicate mantle.
NASA Astrophysics Data System (ADS)
Voepel, H.; Hodge, R. A.; Leyland, J.; Sear, D. A.; Ahmed, S. I.
2014-12-01
Uncertainty for bedload estimates in gravel bed rivers is largely driven by our inability to characterize the arrangement and orientation of the sediment grains within the bed. The characteristics of the surface structure are produced by the water working of grains, which leads to structural differences in bedforms through differential patterns of grain sorting, packing, imbrication, mortaring and degree of bed armoring. Until recently, the technical and logistical difficulties of characterizing the arrangement of sediment in 3D have prohibited a full understanding of how grains interact with stream flow and the feedback mechanisms that exist. Micro-focus X-ray CT has been used for non-destructive 3D imaging of grains within a series of intact sections of river bed taken from key morphological units (see Figure 1). Volume, center of mass, points of contact, protrusion and spatial orientation of individual surface grains are derived from these 3D images, which in turn facilitates estimates of 3D static force properties at the grain scale such as pivoting angles, buoyancy and gravity forces, and grain exposure. By aggregating representative samples of grain-scale properties of localized interacting sediment into overall metrics, we can compare and contrast bed stability at a macro-scale with respect to stream bed morphology. Understanding differences in bed stability through representative metrics derived at the grain scale will ultimately lead to improved bedload estimates with reduced uncertainty and increased understanding of interactions between grain-scale properties and channel morphology. Figure 1. CT scans of a water-worked gravel-filled pot. a. 3D rendered scan showing the outer mesh, and b. the same pot with the mesh removed. c. Vertical change in porosity of the gravels sampled in 5 mm volumes; values are typical of those measured in the field and lab. d. 2D slices through the gravels at 20% depth from the surface (porosity = 0.35), and e. at 75% depth from the surface (porosity = 0.24), showing the presence of fine sediments 'mortaring' the larger gravels. f. A longitudinal slice from which pivot angle measurements can be determined for contact points between particles. g. Example of two-particle extraction from the CT scan showing how particle contact areas can be measured (dark area).
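A vertical porosity profile of the kind shown in Figure 1c can be computed directly from a binarized CT volume by counting pore voxels in depth slabs. A minimal sketch, assuming a nested-list volume where 1 marks solid grain and 0 marks pore space (the slab of consecutive layers stands in for the 5 mm sampling volumes):

```python
def porosity_profile(volume, slab=5):
    # Depth profile of porosity from a binarized CT volume:
    # volume[z][y][x] is 1 for solid (grain) and 0 for pore space.
    # Each profile entry covers a window of `slab` consecutive layers.
    profile = []
    for z0 in range(0, len(volume), slab):
        layers = volume[z0:z0 + slab]
        total = sum(len(row) for layer in layers for row in layer)
        solid = sum(v for layer in layers for row in layer for v in row)
        profile.append((total - solid) / total)
    return profile
```

In practice the binarization threshold and slab thickness would be calibrated against the scanner resolution; the values here are placeholders.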
Co-Transplantation of Nanofat Enhances Neovascularization and Fat Graft Survival in Nude Mice.
Yu, Qian; Cai, Yizuo; Huang, He; Wang, Zhenxing; Xu, Peng; Wang, Xiangsheng; Zhang, Lu; Zhang, Wenjie; Li, Wei
2018-05-15
Autologous fat grafting is commonly used for soft-tissue augmentation and reconstruction. However, this technique is limited by a high rate of graft absorption. Thus, approaches to improve fat graft survival that promote neovascularization are of great interest. Nanofat has several beneficial features that may render it more suitable for clinical applications than other stem-cell based approaches. We aimed to determine whether nanofat could enhance new vessel formation and improve the long-term retention of fat grafts. Nanofat was processed via mechanical emulsification and filtration. Fat grafts were transplanted subcutaneously under the scalps of nude mice with different nanofat volumes or without nanofat. The grafted fat was dissected 12 weeks after transplantation. Graft weight and volume were measured, and histological evaluations, including capillary density measurement, were performed. The co-transplantation of fat with nanofat showed higher graft weight and volume retention, better histological structure, and higher capillary density compared to that in controls. However, there were no significant differences between the two nanofat volumes utilized. Nanofat can enhance neovascularization and improve fat graft survival, providing a potential clinically viable approach to fat graft supplementation in plastic and reconstructive surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castelluccio, Gustavo M.; McDowell, David L.
2015-05-22
The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary both to achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume-averaged FIPs. Grain-averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.
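The grain-averaging scheme discussed above can be sketched as follows. This is a minimal illustration, not the paper's crystal plasticity implementation: the FIP values and grain labels are synthetic, and band averaging would group elements by slip-plane bands rather than by grain.

```python
import numpy as np

def grain_averaged_fips(fip, grain_id):
    """Average a per-element Fatigue Indicator Parameter over each grain.

    fip: 1D array of per-element FIP values.
    grain_id: 1D integer array assigning each element to a grain.
    Returns a dict {grain: mean FIP}.
    """
    out = {}
    for g in np.unique(grain_id):
        out[g] = fip[grain_id == g].mean()
    return out

rng = np.random.default_rng(1)
fip = rng.lognormal(mean=-6.0, sigma=1.0, size=1000)  # heavy-tailed, like FIPs
grains = rng.integers(0, 20, size=1000)
avg = grain_averaged_fips(fip, grains)
# Averaging smears out extremes: the max grain-averaged FIP sits below the raw max
print(max(avg.values()) < fip.max())
```

The comparison at the end illustrates the abstract's point that grain averaging suppresses the extreme values that drive crack formation.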
Visualizing Human Migration Through Space and Time
NASA Astrophysics Data System (ADS)
Zambotti, G.; Guan, W.; Gest, J.
2015-07-01
Human migration has been an important activity in human societies since antiquity. Since 1890, approximately three percent of the world's population has lived outside of their country of origin. As globalization intensifies in the modern era, human migration persists even as governments seek to more stringently regulate flows. Understanding this phenomenon, its causes, processes and impacts often starts from measuring and visualizing its spatiotemporal patterns. This study builds a generic online platform for users to interactively visualize human migration through space and time. This entails quickly ingesting human migration data in plain text or tabular format; matching the records with pre-established geographic features such as administrative polygons; symbolizing the migration flow by circular arcs of varying color and weight based on the flow attributes; connecting the centroids of the origin and destination polygons; and allowing the user to select either an origin or a destination feature to display all flows in or out of that feature through time. The method was first developed using ArcGIS Server for world-wide cross-country migration, and later applied to visualizing domestic migration patterns within China between provinces, and between states in the United States, all through multiple years. The technical challenges of this study include simplifying the shapes of features to enhance user interaction, rendering performance and application scalability; enabling the temporal renderers to provide time-based rendering of features and the flow among them; and developing a responsive web design (RWD) application to provide an optimal viewing experience. The platform is available online for the public to use, and the methodology is easily adoptable to visualizing any flow, not only human migration but also the flow of goods, capital, disease, ideology, etc., between multiple origins and destinations across space and time.
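The flow symbolization described above (circular arcs connecting origin and destination centroids) can be sketched as a quadratic Bezier curve whose control point is offset perpendicular to the chord. This is a hypothetical illustration; the `bend` parameter and function name are not from the paper, and the platform itself uses ArcGIS Server rendering.

```python
def flow_arc(origin, dest, bend=0.2, n=16):
    """Sample points along an arc from an origin centroid to a destination.

    origin, dest: (x, y) centroids of the origin/destination polygons.
    bend: fraction of the chord length by which the control point is offset
    perpendicular to the chord (illustrative parameter).
    Returns a list of n (x, y) points on a quadratic Bezier curve.
    """
    (x0, y0), (x1, y1) = origin, dest
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    dx, dy = x1 - x0, y1 - y0
    # control point offset perpendicular to the chord
    cx, cy = mx - dy * bend, my + dx * bend
    pts = []
    for i in range(n):
        t = i / (n - 1)
        # quadratic Bezier: (1-t)^2 P0 + 2t(1-t) C + t^2 P1
        x = (1 - t) ** 2 * x0 + 2 * t * (1 - t) * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * t * (1 - t) * cy + t ** 2 * y1
        pts.append((x, y))
    return pts

arc = flow_arc((0.0, 0.0), (10.0, 0.0))
print(arc[0], arc[-1])  # endpoints coincide with the two centroids
```

Color and line weight would then be assigned per arc from the flow attributes (e.g. migrant counts), as the abstract describes.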
NASA Astrophysics Data System (ADS)
Wolf, Ivo; Nolden, Marco; Schwarz, Tobias; Meinzer, Hans-Peter
2010-02-01
The Medical Imaging Interaction Toolkit (MITK) and the eXtensible Imaging Platform (XIP) both aim at facilitating the development of medical imaging applications, but provide support on different levels. MITK offers support from the toolkit level, whereas XIP comes with a visual programming environment. XIP is strongly based on Open Inventor. Open Inventor with its scene graph-based rendering paradigm was not specifically designed for medical imaging, but focuses on creating dedicated visualizations. MITK has a visualization concept with a model-view-controller like design that assists in implementing multiple, consistent views on the same data, which is typically required in medical imaging. In addition, MITK defines a unified means of describing position, orientation, bounds, and (if required) local deformation of data and views, supporting e.g. images acquired with gantry tilt and curved reformations. The actual rendering is largely delegated to the Visualization Toolkit (VTK). This paper presents an approach of how to integrate the visualization concept of MITK with XIP, especially into the XIP-Builder. This is a first step of combining the advantages of both platforms. It enables experimenting with algorithms in the XIP visual programming environment without requiring a detailed understanding of Open Inventor. Using MITK-based add-ons to XIP, any number of data objects (images, surfaces, etc.) produced by algorithms can simply be added to an MITK DataStorage object and rendered into any number of slice-based (2D) or 3D views. Both MITK and XIP are open-source C++ platforms. The extensions presented in this paper will be available from www.mitk.org.
SVGenes: a library for rendering genomic features in scalable vector graphic format.
Etherington, Graham J; MacLean, Daniel
2013-08-01
Drawing genomic features in attractive and informative ways is a key task in visualization of genomics data. Scalable Vector Graphics (SVG) format is a modern and flexible open standard that provides advanced features including modular graphic design, advanced web interactivity and animation within a suitable client. SVGs do not suffer from loss of image quality on re-scaling and allow individual elements of a graphic to be edited at the object level, independently of the whole image. These features make SVG a potentially useful format for the preparation of publication-quality figures including genomic objects such as genes or sequencing coverage, and for web applications that require rich user interaction with the graphical elements. SVGenes is a Ruby-language library that uses SVG primitives to render typical genomic glyphs through a simple and flexible Ruby interface. The library implements a simple Page object that spaces and contains horizontal Track objects, which in turn style, colour and position the features within them. Tracks are the level at which visual information is supplied, providing the full styling capability of the SVG standard. Genomic entities like genes, transcripts and histograms are modelled in Glyph objects that are attached to a track and take advantage of SVG primitives to render the genomic features in a track as any of a selection of defined glyphs. The feature model within SVGenes is simple but flexible and not dependent on particular existing gene feature formats, meaning graphics for any existing datasets can easily be created without the need for conversion. The library is provided as a Ruby Gem from https://rubygems.org/gems/bio-svgenes under the MIT license, and open source code is available at https://github.com/danmaclean/bioruby-svgenes, also under the MIT License. dan.maclean@tsl.ac.uk.
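The kind of glyph SVGenes draws can be illustrated with a few SVG primitives: exon rectangles joined by an intron line. The sketch below is plain Python, not the library's Ruby API; the function name and parameters are hypothetical.

```python
def gene_glyph_svg(exons, y=20, height=10, color="steelblue"):
    """Render a minimal gene glyph (exon boxes joined by an intron line) as SVG.

    exons: list of (start, end) x-coordinates in pixels. Illustrative only;
    SVGenes itself adds tracks, labels, and many more glyph types.
    """
    x0 = min(s for s, _ in exons)
    x1 = max(e for _, e in exons)
    # intron line spanning the whole gene, drawn behind the exon boxes
    parts = ['<line x1="%d" y1="%d" x2="%d" y2="%d" stroke="%s"/>'
             % (x0, y + height // 2, x1, y + height // 2, color)]
    for s, e in exons:
        parts.append('<rect x="%d" y="%d" width="%d" height="%d" fill="%s"/>'
                     % (s, y, e - s, height, color))
    return '<svg xmlns="http://www.w3.org/2000/svg">%s</svg>' % "".join(parts)

svg = gene_glyph_svg([(10, 40), (60, 90), (120, 150)])
print(svg[:60])
```

Because the output is SVG, each `rect` and `line` remains an editable object in any vector editor, which is exactly the re-scaling and per-element editing advantage the abstract highlights.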
Learning from Childhood: Children Tell Us Who They Are through Online Dialogical Interaction
ERIC Educational Resources Information Center
Saracco, Susanna
2016-01-01
Philosophy of childhood is a field of inquiry in which the protagonists are adults, who are trying to understand children, and children, who are trying to be understood by adults. These two operating agents must find a common ground that renders their communication possible. This piece develops and illustrates the notion that no theorisation can…
Using ANSYS Fluent on the Peregrine System | High-Performance Computing |
There are two ways to run ANSYS CFD interactively on NREL HPC systems. When graphics rendering is not critical, note that remote display performance is quite low (e.g., windows take a long time to come up); for small tasks this may be acceptable. To improve responsiveness, go to Category/Connection/SSH and check the "enable compression" box.
On the putative essential discreteness of q-generalized entropies
NASA Astrophysics Data System (ADS)
Plastino, A.; Rocca, M. C.
2017-12-01
It has been argued in Abe (2010), entitled "Essential discreteness in generalized thermostatistics with non-logarithmic entropy", that "continuous Hamiltonian systems with long-range interactions and the so-called q-Gaussian momentum distributions are seen to be outside the scope of non-extensive statistical mechanics". The arguments are clever and appealing. We show here, however, that some mathematical subtleties render them unconvincing.
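For context, the standard definitions of the quantities at issue are the Tsallis (q-generalized) entropy and the q-Gaussian distribution, which recover the Boltzmann-Gibbs entropy and the ordinary Gaussian in the limit q → 1:

```latex
S_q = k\,\frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i,
\qquad
p_q(x) \propto \bigl[\,1 - (1-q)\,\beta x^2\,\bigr]_{+}^{1/(1-q)}
\;\xrightarrow{\,q \to 1\,}\; e^{-\beta x^2}.
```

Here $[\,\cdot\,]_{+}$ denotes the positive part, which cuts off the distribution for $q < 1$.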
Zibrek, Katja; Kokkinara, Elena; Mcdonnell, Rachel
2018-04-01
Virtual characters that appear almost photo-realistic have been shown to induce negative responses from viewers in traditional media, such as film and video games. This effect, described as the uncanny valley, is the reason why realism is often avoided when the aim is to create an appealing virtual character. In Virtual Reality, there have been few attempts to investigate this phenomenon and the implications of rendering virtual characters with high levels of realism on user enjoyment. In this paper, we conducted a large-scale experiment on over one thousand members of the public in order to gather information on how virtual characters are perceived in interactive virtual reality games. We were particularly interested in whether different render styles (realistic, cartoon, etc.) would directly influence appeal, or if a character's personality was the most important indicator of appeal. We used a number of perceptual metrics such as subjective ratings, proximity, and attribution bias in order to test our hypothesis. Our main result shows that affinity towards virtual characters is a complex interaction between the character's appearance and personality, and that realism is in fact a positive choice for virtual characters in virtual reality.
[Construction of information management-based virtual forest landscape and its application].
Chen, Chongcheng; Tang, Liyu; Quan, Bing; Li, Jianwei; Shi, Song
2005-11-01
Based on the analysis of the contents and technical characteristics of forest visualization modeling at different scales, this paper puts forward the principles and technical system for constructing an information management-based virtual forest landscape. Combining process modeling with descriptions of tree geometric structure, a software method for interactive, parameterized tree modeling was developed, and the corresponding rendering and geometric-element simplification algorithms were described to speed up rendering at run time. As a pilot study, geometric model bases for the typical tree categories in Zhangpu County of Fujian Province, southeast China were established as template files. A Virtual Forest Management System prototype was developed with a GIS component (ArcObjects), the OpenGL graphics environment, and the Visual C++ language, based on forest inventory and remote sensing data. The prototype can be used for roaming between 2D and 3D views, information query and analysis, and virtual, interactive forest growth simulation, and its realism and accuracy can meet the needs of forest resource management. Some typical interfaces of the system and illustrative scene cross-sections of simulated masson pine growth under conditions of competition and thinning are presented.
NASA Astrophysics Data System (ADS)
Roth, Eatai; Howell, Darrin; Beckwith, Cydney; Burden, Samuel A.
2017-05-01
Humans, interacting with cyber-physical systems (CPS), formulate beliefs about the system's dynamics. It is natural to expect that human operators, tasked with teleoperation, use these beliefs to control the remote robot. For tracking tasks in the resulting human-cyber-physical system (HCPS), theory suggests that human operators can achieve exponential tracking (in stable systems) without state estimation provided they possess an accurate model of the system's dynamics. This internalized inverse model, however, renders a portion of the system state unobservable to the human operator—the zero dynamics. Prior work shows humans can track through observable linear dynamics, thus we focus on nonlinear dynamics rendered unobservable through tracking control. We propose experiments to assess the human operator's ability to learn and invert such models, and distinguish this behavior from that achieved by pure feedback control.
Third Party Interaction in the Medical Context: Code-switching and Control
Vickers, Caroline H.; Goble, Ryan; Deckert, Sharon K.
2015-01-01
The purpose of this paper is to examine the micro-interactional co-construction of power within Spanish language concordant medical consultations in California involving a third party family member. Findings indicate the third party instigates code-switching to English on the part of medical providers, a language that the patient does not understand, rendering the patient a non-participant in the medical consultation. In these consultations involving a third party family member, monolingual Spanish-speaking patients are stripped of control in ways that are similar to other powerless groups in medical consultations. Implications include the need to further examine how micro-level interactions reproduce societal ideologies and shape policy on the ground. PMID:27667896
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Kim, Hae-Kwang
2007-12-01
In this paper, we introduce a graphics-to-Scalable Vector Graphics (SVG) adaptation framework with a mechanism for vector graphics transmission, to overcome the shortcomings in real-time representation and interaction of 3D graphics applications running on mobile devices. We therefore develop an interactive 3D visualization system based on the proposed framework for rapidly representing a 3D scene on mobile devices without having to download it from the server. Our system is composed of a client viewer and a graphics-to-SVG adaptation server. The client viewer allows the user to access the same 3D content from different devices according to consumer interactions.
NASA Astrophysics Data System (ADS)
Levit, Creon; Gazis, P.
2006-06-01
The graphics processing units (GPUs) built into all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform (Windows, Linux, Apple OS X) application which leverages some of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application area is the interactive analysis of complex, multivariate space science and astrophysics data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.
Human-scale interaction for virtual model displays: a clear case for real tools
NASA Astrophysics Data System (ADS)
Williams, George C.; McDowall, Ian E.; Bolas, Mark T.
1998-04-01
We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch-sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and the interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.
Asymmetric Cooperative Catalysis of Strong Brønsted Acid-Promoted Reactions Using Chiral Ureas
Xu, Hao; Zuend, Stephan J.; Woll, Matthew G.; Tao, Ye; Jacobsen, Eric N.
2010-01-01
Cationic organic intermediates participate in a wide variety of useful synthetic transformations, but their high reactivity can render selectivity in competing pathways difficult to control. We describe a strategy for inducing enantioselectivity in reactions of protio-iminium ions, wherein a chiral catalyst interacts with the highly reactive intermediate through a network of non-covalent interactions. This leads to an attenuation of the reactivity of the iminium ion, and allows high enantioselectivity in cycloadditions with electron-rich alkenes (the Povarov reaction). A detailed experimental and computational analysis of this catalyst system has revealed the precise nature of the catalyst-substrate interactions and the likely basis for enantioinduction. PMID:20167783
Clinical Application of an Open-Source 3D Volume Rendering Software to Neurosurgical Approaches.
Fernandes de Oliveira Santos, Bruno; Silva da Costa, Marcos Devanir; Centeno, Ricardo Silva; Cavalheiro, Sergio; Antônio de Paiva Neto, Manoel; Lawton, Michael T; Chaddad-Neto, Feres
2018-02-01
Preoperative recognition of the anatomic individualities of each patient can help to achieve more precise and less invasive approaches. It also may help to anticipate potential complications and intraoperative difficulties. Here we describe the use, accuracy, and precision of a free tool for planning microsurgical approaches using 3-dimensional (3D) reconstructions from magnetic resonance imaging (MRI). We used the 3D volume rendering tool of a free open-source software program for 3D reconstruction of images of surgical sites obtained by MRI volumetric acquisition. We recorded anatomic reference points, such as the sulcus and gyrus, and vascularization patterns for intraoperative localization of lesions. Lesion locations were confirmed during surgery by intraoperative ultrasound and/or electrocorticography and later by postoperative MRI. Between August 2015 and September 2016, a total of 23 surgeries were performed using this technique for 9 low-grade gliomas, 7 high-grade gliomas, 4 cortical dysplasias, and 3 arteriovenous malformations. The technique helped delineate lesions with an overall accuracy of 2.6 ± 1.0 mm. 3D reconstructions were successfully performed in all patients, and images showed sulcus, gyrus, and venous patterns corresponding to the intraoperative images. All lesion areas were confirmed both intraoperatively and at the postoperative evaluation. With the technique described herein, it was possible to successfully perform 3D reconstruction of the cortical surface. This reconstruction tool may serve as an adjunct to neuronavigation systems or may be used alone when such a system is unavailable. Copyright © 2017 Elsevier Inc. All rights reserved.
Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Oh, Soram; Gu, Yu; Lee, Seung-Pyo; Chang, Seok Woo; Lee, Woocheol; Baek, Seung-Ho; Zhu, Qiang; Kum, Kee-Yeon
2015-08-01
Micro-computed tomography (MCT) with alternative image reformatting techniques shows complex and detailed root canal anatomy. This study compared two-dimensional (2D) and 3D MCT image reformatting with standard tooth clearing for studying mandibular first molar mesial root canal morphology. Extracted human mandibular first molar mesial roots (n=31) were scanned by MCT (Skyscan 1172). 2D thin-slab minimum intensity projection (TS-MinIP) and 3D volume rendered images were constructed. The same teeth were then processed by clearing and staining. For each root, images obtained from clearing, 2D, 3D and combined 2D and 3D techniques were examined independently by four endodontists and categorized according to Vertucci's classification. Fine anatomical structures such as accessory canals, intercanal communications and loops were also identified. Agreement among the four techniques for Vertucci's classification was 45.2% (14/31). The most frequent were Vertucci's type IV and then type II, although many had complex configurations that were non-classifiable. Generally, complex canal systems were more clearly visible in MCT images than with standard clearing and staining. Fine anatomical structures such as intercanal communications, accessory canals and loops were mostly detected with a combination of 2D TS-MinIP and 3D volume-rendering MCT images. Canal configurations and fine anatomic structures were more clearly observed in the combined 2D and 3D MCT images than the clearing technique. The frequency of non-classifiable configurations demonstrated the complexity of mandibular first molar mesial root canal anatomy.
Imaging system for creating 3D block-face cryo-images of whole mice
NASA Astrophysics Data System (ADS)
Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David
2006-03-01
We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier for them to interpret image data. The combination of field of view, depth of field, ultra high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes, characterization of diseases like blood vessel disease, kidney disease, and cancer, assessment of drug and gene therapy delivery and efficacy, and validation of other imaging modalities.
Persson, A; Brismar, T B; Lundström, C; Dahlström, N; Othberg, F; Smedby, O
2006-03-01
To compare three methods for standardizing volume rendering technique (VRT) protocols by studying aortic diameter measurements in magnetic resonance angiography (MRA) datasets. Datasets from 20 patients previously examined with gadolinium-enhanced MRA and with digital subtraction angiography (DSA) for abdominal aortic aneurysm were retrospectively evaluated by three independent readers. The MRA datasets were viewed using VRT with three different standardized transfer functions: the percentile method (Pc-VRT), the maximum-likelihood method (ML-VRT), and the partial range histogram method (PRH-VRT). The aortic diameters obtained with these three methods were compared with freely chosen VRT parameters (F-VRT) and with maximum intensity projection (MIP) concerning inter-reader variability and agreement with the reference method DSA. F-VRT parameters and PRH-VRT gave significantly higher diameter values than DSA, whereas Pc-VRT gave significantly lower values than DSA. The highest interobserver variability was found for F-VRT parameters and MIP, and the lowest for Pc-VRT and PRH-VRT. All standardized VRT methods were significantly superior to both MIP and F-VRT in this respect. The agreement with DSA was best for PRH-VRT, which was the only method with a mean error below 1 mm and which also had the narrowest limits of agreement (95% of cases between 2.1 mm below and 3.1 mm above DSA). All the standardized VRT methods compare favorably with MIP and VRT with freely selected parameters as regards interobserver variability. The partial range histogram method, although systematically overestimating vessel diameters, gives results closest to those of DSA.
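The idea of standardizing a volume rendering transfer function from the data itself can be sketched as follows: map voxel intensity to opacity with a linear ramp anchored at two intensity percentiles of the dataset, in the spirit of the percentile (Pc-VRT) method compared above. The specific percentiles and function name here are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def percentile_opacity(volume, lo_pct=90.0, hi_pct=99.0):
    """Derive a standardized opacity transfer function from the data.

    Voxels below the lo_pct intensity percentile get opacity 0, voxels above
    the hi_pct percentile get opacity 1, with a linear ramp in between.
    """
    lo = np.percentile(volume, lo_pct)
    hi = np.percentile(volume, hi_pct)
    opacity = (volume - lo) / (hi - lo)
    return np.clip(opacity, 0.0, 1.0)

rng = np.random.default_rng(2)
vol = rng.normal(100.0, 30.0, size=(32, 32, 32))  # synthetic MRA-like intensities
op = percentile_opacity(vol)
print(op.min(), op.max())  # the ramp is clipped to [0, 1]
```

Because the ramp anchors adapt to each dataset's histogram, two readers applying this rule to the same MRA volume obtain the same transfer function, which is the source of the reduced interobserver variability reported for the standardized methods.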
Roughness based perceptual analysis towards digital skin imaging system with haptic feedback.
Kim, K
2016-08-01
To examine psoriasis or atopic eczema, analyzing skin roughness by palpation is essential to precisely diagnose skin diseases. However, optical sensor-based skin imaging systems do not allow dermatologists to touch skin images. To solve this problem, a new haptic rendering technology that can accurately display skin roughness must be developed. In addition, the rendering algorithm must be able to filter spatial noise created during 2D-to-3D image conversion without losing the original roughness of the skin image. In this study, a perceptual approach to designing a noise filter that removes spatial noise while recovering maximal roughness is introduced, based on an understanding of human sensitivity to surface roughness. A visuohaptic rendering system that lets a user see and touch digital skin surface roughness has been developed, including a geometric roughness estimation method for a meshed surface. Following this, a psychophysical experiment was designed and conducted with 12 human subjects to measure human perception with the developed visual and haptic interfaces when examining surface roughness. From the psychophysical experiment, it was found that touch is more sensitive at lower surface roughness, and vice versa. Human perception with both senses, vision and touch, becomes less sensitive to surface distortions as roughness increases. When interacting through both channels, visual and haptic, the ability to detect roughness abnormalities is greatly improved by sensory integration with the developed visuohaptic rendering system. The result can be used as a guideline to design a noise filter that perceptually removes spatial noise while recovering maximal roughness values from a digital skin image obtained by optical sensors. In addition, the result also confirms that the developed visuohaptic rendering system can help dermatologists or skin care professionals examine skin conditions using vision and touch at the same time.
© 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
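A basic surface roughness measure of the kind estimated above can be sketched as the root-mean-square (Rq) deviation of a height map. This is a simple geometric stand-in, not the paper's mesh-based estimation method; the height maps below are synthetic.

```python
import numpy as np

def rms_roughness(heights):
    """Root-mean-square roughness (Rq) of a surface height map.

    heights: 2D array of surface heights (e.g. reconstructed from a skin
    image). Rq is the standard deviation of heights about the mean plane.
    """
    deviations = heights - heights.mean()
    return np.sqrt(np.mean(deviations ** 2))

rng = np.random.default_rng(3)
smooth = rng.normal(0.0, 0.1, size=(64, 64))  # low-roughness surface
rough = rng.normal(0.0, 1.0, size=(64, 64))   # high-roughness surface
print(rms_roughness(smooth) < rms_roughness(rough))  # True
```

A noise filter tuned as the abstract suggests would be judged by how little it reduces such a roughness value while still removing the 2D-to-3D conversion noise.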
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two point correlation function for surface area estimates using Bayes' theory. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
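The Gaussian-mixture mapping from grey level to voxel porosity described above can be sketched as a two-component posterior. This is a minimal stand-in: the component means, shared standard deviation, and weight are assumed known here, whereas in practice they would be fitted to the image histogram (e.g. by EM); all parameter values below are illustrative.

```python
import numpy as np

def voxel_porosity(grey, mu_pore, mu_solid, sigma, w_pore=0.5):
    """Posterior probability that a voxel is pore, from a two-component
    Gaussian mixture over grey levels.

    Partial-volume voxels, whose grey values lie between the pore and solid
    modes, receive intermediate porosity estimates instead of being forced
    to 0 or 1 by segmentation.
    """
    def gauss(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    p_pore = w_pore * gauss(grey, mu_pore)
    p_solid = (1 - w_pore) * gauss(grey, mu_solid)
    return p_pore / (p_pore + p_solid)

grey = np.array([30.0, 100.0, 170.0])  # pore-like, partial volume, solid-like
phi = voxel_porosity(grey, mu_pore=30.0, mu_solid=170.0, sigma=25.0)
print(phi)  # ~[1.0, 0.5, 0.0]
```

The middle voxel, equidistant from both modes, is assigned porosity 0.5, which is exactly the partial-volume information that a hard segmentation would discard.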
El-Najjar, Nahed; Jantsch, Jonathan; Gessner, André
2017-08-28
Cancer remains a leading cause of mortality and morbidity worldwide. In addition to organ failure, the most frequent reasons for admission of cancer patients to intensive care units (ICU) are infections and sepsis. Because these patients are critically ill, the complexity of their health situation renders the standard antimicrobial regimen more complex and even inadequate, which results in increased mortality rates. This is due to pathophysiological changes in the volume of distribution, increased clearance, as well as to organ dysfunction. While in the former cases a decrease in drug efficacy is observed, the hallmark of the latter is overdosing, leading to increased toxicity at the expense of efficacy. Furthermore, an additional risk factor is the potential drug-drug interaction between antibiotics and antineoplastic agents. Therefore, therapeutic drug monitoring (TDM) is a necessity to improve the clinical outcome of antimicrobial therapy in cancer patients. To be applied in routine analysis, the method used for TDM should be cheap, fast and highly accurate/sensitive. Furthermore, as ICU patients are treated with a cocktail of antibiotics, the method has to cover the simultaneous analysis of antibiotics used as a first/second line of treatment. The aim of the current review is to briefly survey the pitfalls in current antimicrobial therapy and the central role of TDM in dose adjustment and the evaluation of drug-drug interactions. A major section is dedicated to summarizing the currently published analytical methods and to shedding light on the difficulties and potential problems that can be encountered during method development.
A framework for interactive visualization of digital medical images.
Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot
2008-10-01
The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies that are in use in other fields have yet to be fully applied in medicine. It is our estimation that usability is the key aspect keeping this new technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques but also features powerful yet simple-to-use interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool, which can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.
Visualization and simulation techniques for surgical simulators using actual patient's data.
Radetzky, Arne; Nürnberger, Andreas
2002-11-01
Because of the increasing complexity of surgical interventions, research in surgical simulation has become increasingly important in recent years. However, the simulation of tissue deformation is still a challenging problem, mainly due to the short response times that are required for real-time interaction. The demands on hardware and software are even greater if not only modeled human anatomy but the anatomy of actual patients is used. This is required if the surgical simulator is to be used as a training medium for expert surgeons rather than students. In this article, suitable visualization and simulation methods for surgical simulation utilizing actual patients' datasets are described. To this end, the advantages and disadvantages of direct and indirect volume rendering for the visualization are discussed, and a neuro-fuzzy system is described that can be used for the simulation of interactive tissue deformations. The neuro-fuzzy system makes it possible to define the deformation behavior based on a linguistic description of the tissue characteristics or to learn the dynamics by using measured data from real tissue. Furthermore, a simulator for minimally invasive neurosurgical interventions is presented that utilizes the described visualization and simulation methods. The structure of the simulator is described in detail, and the results of a system evaluation by an experienced neurosurgeon (a quantitative comparison between different methods of virtual endoscopy as well as a comparison between real brain images and virtual endoscopies) are given. The evaluation showed that the simulator provides higher realism of visualization and simulation than other currently available simulators. Copyright 2002 Elsevier Science B.V.
RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research.
Heasly, Benjamin S; Cottaris, Nicolas P; Lichtman, Daniel P; Xiao, Bei; Brainard, David H
2014-02-07
RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.
NASA Astrophysics Data System (ADS)
Joshi, Rajan L.
2006-03-01
In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as-needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
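The idea of fetching only the precincts an oblique slice touches can be sketched as plane-to-precinct-grid geometry. This is an illustration only, with hypothetical precinct dimensions; a real JPIP client requests data-bins per resolution level and codestream rather than raw voxel blocks:

```python
def precincts_for_slice(origin, u_axis, v_axis, size_u, size_v,
                        precinct=(64, 64, 8)):
    """Return the set of (px, py, pz) precinct indices touched by an
    oblique slice.  The slice is parameterised by an origin voxel and
    two in-plane direction vectors; precinct gives the assumed
    (x, y, z) granularity of the compressed data."""
    needed = set()
    for i in range(size_u):
        for j in range(size_v):
            # Voxel coordinate of sample (i, j) on the slice plane.
            x = origin[0] + i * u_axis[0] + j * v_axis[0]
            y = origin[1] + i * u_axis[1] + j * v_axis[1]
            z = origin[2] + i * u_axis[2] + j * v_axis[2]
            needed.add((int(x) // precinct[0],
                        int(y) // precinct[1],
                        int(z) // precinct[2]))
    return needed

# An axis-aligned 128x128 slice at z=20 touches 2x2 precincts in x-y
# and a single 8-deep precinct band in z, i.e. 4 precincts total.
slice_precincts = precincts_for_slice((0, 0, 20), (1, 0, 0), (0, 1, 0), 128, 128)
print(len(slice_precincts))  # -> 4
```

Sending only these indices to the server, and caching precincts already received, is what yields the bandwidth savings the paper quantifies.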
Interactive Web-based Visualization of Atomic Position-time Series Data
NASA Astrophysics Data System (ADS)
Thapa, S.; Karki, B. B.
2017-12-01
Extracting and interpreting the information contained in large sets of time-varying three-dimensional positional data for the constituent atoms of a simulated material is a challenging task. We have recently implemented a web-based visualization system to analyze position-time series data extracted from local or remote hosts. It includes a pre-processing step for data reduction, which skips uninteresting parts of the data uniformly (at the full atomic configuration level) or non-uniformly (at the atomic species level or individual atom level). Each atomic configuration snapshot is rendered using the ball-stick representation and can be animated by rendering successive configurations. The entire atomic dynamics can be captured as trajectories by rendering the atomic positions at all time steps together as points. The trajectories can be manipulated at both the species and atomic levels so that we can focus on one or more trajectories of interest, and they can also be superimposed with the instantaneous atomic structure. The implementation was done using WebGL and Three.js for graphical rendering, HTML5 and JavaScript for the GUI, and Elasticsearch and JSON for data storage and retrieval within the Grails Framework. We have applied our visualization system to simulation datasets for proton-bearing forsterite (Mg2SiO4), an abundant mineral of Earth's upper mantle. Visualization reveals that protons (hydrogen ions) incorporated as interstitials are much more mobile than protons substituting for the host Mg and Si cation sites. The proton diffusion appears to be anisotropic, with high mobility along the x-direction and only limited discrete jumps in the other two directions.
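The uniform and non-uniform data-reduction step can be sketched as stride-based thinning of per-atom position lists. The data layout and function name below are assumptions for illustration, not the paper's implementation:

```python
def reduce_trajectories(positions, stride=1, per_atom_stride=None):
    """Thin a position-time series before visualization.

    positions: dict mapping atom id -> list of (x, y, z) per time step.
    stride: uniform reduction applied at the full-configuration level.
    per_atom_stride: optional dict of atom id -> stride, for
    non-uniform reduction of individual atoms (e.g. keep every step
    for mobile interstitial protons, thin the slow cations heavily).
    """
    reduced = {}
    for atom, steps in positions.items():
        s = (per_atom_stride or {}).get(atom, stride)
        reduced[atom] = steps[::s]
    return reduced

# Keep the proton's full 100-step trajectory, thin Mg tenfold.
traj = {"H1": [(i, 0.0, 0.0) for i in range(100)],
        "Mg1": [(0.0, i, 0.0) for i in range(100)]}
thin = reduce_trajectories(traj, stride=10, per_atom_stride={"H1": 1})
print(len(thin["H1"]), len(thin["Mg1"]))  # -> 100 10
```

The thinned lists are what would be shipped to the browser and handed to Three.js as point clouds for trajectory rendering.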
Self-Interaction Chromatography of mAbs: Accurate Measurement of Dead Volumes.
Hedberg, S H M; Heng, J Y Y; Williams, D R; Liddell, J M
2015-12-01
Measurement of the second virial coefficient B22 for proteins using self-interaction chromatography (SIC) is becoming an increasingly important technique for studying their solution behaviour. In common with all physicochemical chromatographic methods, measuring the dead volume of the SIC packed column is crucial for accurate retention data; this paper examines best practice for dead volume determination. SIC type experiments using catalase, BSA, lysozyme and a mAb as model systems are reported, as well as a number of dead column measurements. It was observed that lysozyme and mAb interacted specifically with Toyopearl AF-Formyl dead columns depending upon pH and [NaCl], invalidating their dead volume usage. Toyopearl AF-Amino packed dead columns showed no such problems and acted as suitable dead columns without any solution condition dependency. Dead volume determinations using dextran MW standards with protein immobilised SIC columns provided dead volume estimates close to those obtained using Toyopearl AF-Amino dead columns. It is concluded that specific interactions between proteins, including mAbs, and select SIC support phases can compromise the use of some standard approaches for estimating the dead volume of SIC columns. Two other methods were shown to provide good estimates for the dead volume.
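The sensitivity of retention measurements to the dead volume can be illustrated with the standard chromatographic retention factor k' = (Vr - V0)/V0, the quantity from which SIC analyses proceed; the volumes below are made-up examples, not data from the paper:

```python
def retention_factor(v_r, v_0):
    """Chromatographic retention factor k' = (Vr - V0) / V0.
    An error in the dead volume V0 shifts k' directly, which is why
    accurate dead-column measurements matter for SIC-derived B22."""
    return (v_r - v_0) / v_0

# With Vr = 2.4 mL: a 5% overestimate of a 2.0 mL dead volume
# shrinks the apparent k' from 0.2 to about 0.143.
print(round(retention_factor(2.4, 2.0), 3))  # -> 0.2
print(round(retention_factor(2.4, 2.1), 3))  # -> 0.143
```

A roughly 30% swing in k' from a 5% dead-volume error shows why a support phase that itself adsorbs the protein, as observed for the AF-Formyl columns, cannot serve as a dead column.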
JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases.
Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard
2005-03-09
Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A range of tools is available for viewing these data, from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and show how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing are available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency, the image data are manipulated using the Woolz image-processing library, provided as a dynamically linked module for each machine architecture. We conclude that Java provides an appropriate environment for efficient development of these tools, and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.
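Arbitrary re-sectioning of a volume, as the viewer provides, amounts to sampling the 3D grid along a user-chosen plane. The sketch below uses nearest-neighbour sampling on a toy volume; JAtlasView itself delegates this work to the Woolz library:

```python
def resection(volume, origin, u_axis, v_axis, size_u, size_v):
    """Extract an arbitrary 2D section from a 3D volume by
    nearest-neighbour sampling along the plane spanned by u_axis and
    v_axis through origin.  volume[z][y][x] holds intensities;
    samples falling outside the volume are returned as 0."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    section = []
    for j in range(size_v):
        row = []
        for i in range(size_u):
            x = round(origin[0] + i * u_axis[0] + j * v_axis[0])
            y = round(origin[1] + i * u_axis[1] + j * v_axis[1])
            z = round(origin[2] + i * u_axis[2] + j * v_axis[2])
            inside = 0 <= x < nx and 0 <= y < ny and 0 <= z < nz
            row.append(volume[z][y][x] if inside else 0)
        section.append(row)
    return section

# 4x4x4 volume whose intensity equals the z index: a 45-degree
# section through the x-z plane picks up increasing values per row.
vol = [[[z for x in range(4)] for y in range(4)] for z in range(4)]
sec = resection(vol, (0, 0, 0), (1, 0, 1), (0, 1, 0), 4, 4)
print(sec[0])  # -> [0, 1, 2, 3]
```

A production viewer would use trilinear interpolation rather than nearest-neighbour lookup, but the plane parameterisation is the same.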