Science.gov

Sample records for 3d volume rendering

  1. 3-D Volume Rendering of Sand Specimen

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Computed tomography (CT) images of resin-impregnated Mechanics of Granular Materials (MGM) specimens are assembled to provide 3-D volume renderings of density patterns formed by dislocation under the external loading stress profile applied during the experiments. The experiments were flown on STS-79 and STS-89. Principal Investigator: Dr. Stein Sture

  2. Volume rendering for interactive 3D segmentation

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Derz, Claus

    1997-05-01

    Combined emission/absorption and reflection/transmission volume rendering is able to display poorly segmented structures from 3D medical image sequences. Visual cues such as shading and color let the user distinguish structures in the 3D display that are incompletely extracted by threshold segmentation. In order to be truly helpful, analyzed information needs to be quantified and transferred back into the data. We extend our previously presented scheme for such displays by establishing a communication between visual analysis and the display process. The main tool is a selective 3D picking device. To be useful on a rather rough segmentation, the device itself and the display offer facilities for object selection. Selective intersection planes let the user discard information prior to choosing a tissue of interest. Subsequently, picking is carried out on the 2D display by casting a ray into the volume. The picking device is made pre-selective using already existing segmentation information. Thus, objects can be picked that are visible behind semi-transparent surfaces of other structures. Information generated by a subsequent connected-component analysis can then be integrated into the data. Data examination is continued on an improved display, letting the user actively participate in the analysis process. Results of this display-and-interaction scheme proved to be very effective. The viewer's ability to extract relevant information from a complex scene is combined with the computer's ability to quantify this information. The approach introduces 3D computer graphics methods into user-guided image analysis, creating an analysis-synthesis cycle for interactive 3D segmentation.
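
    The selective 3D picking idea above can be illustrated with a small sketch: a ray derived from the 2D click is walked through the volume, and the first sample that is both above an intensity threshold and carries a user-selectable label is returned. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; `volume`, `labels`, `selectable`, and the camera-derived `origin`/`direction` are hypothetical placeholders.

```python
# A minimal sketch (not the authors' implementation) of picking a voxel by
# casting a ray from a 2D screen position into a volume. Names such as
# `volume`, `labels`, and `selectable` are hypothetical placeholders.
import numpy as np

def pick_voxel(volume, labels, selectable, origin, direction, step=0.5, threshold=100):
    """Walk along the ray and return the first voxel that both exceeds the
    intensity threshold and carries a label the user marked as selectable."""
    direction = direction / np.linalg.norm(direction)
    pos = origin.astype(float).copy()
    while np.all(pos >= 0) and np.all(pos < np.array(volume.shape) - 1):
        idx = tuple(np.round(pos).astype(int))
        if volume[idx] >= threshold and selectable[labels[idx]]:
            return idx                      # picked voxel index (z, y, x)
        pos += step * direction
    return None                             # ray left the volume without a hit

# Example on synthetic data: a bright blob embedded in a dark volume.
vol = np.zeros((64, 64, 64), dtype=np.int16)
vol[30:40, 30:40, 30:40] = 200
lab = (vol > 0).astype(np.int32)            # label 1 = blob, 0 = background
print(pick_voxel(vol, lab, selectable={0: False, 1: True},
                 origin=np.array([0.0, 32.0, 32.0]),
                 direction=np.array([1.0, 0.0, 0.0])))
```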

  3. Incremental volume reconstruction and rendering for 3-D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Ohbuchi, Ryutarou; Chen, David; Fuchs, Henry

    1992-09-01

    In this paper, we present approaches toward an interactive visualization of a real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, 3 degrees-of-freedom (DOF) incremental system visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line, and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
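
    The incremental reconstruction step described above can be sketched as follows: each tracked 2D slice is mapped into the regular 3D grid by its pose and accumulated together with a weight volume. This is a minimal sketch under stated assumptions (a 4x4 `pose` matrix, nearest-voxel splatting), not the paper's incremental algorithm.

```python
# A minimal sketch (assumptions, not the paper's algorithm) of inserting one
# tracked 2D ultrasound slice into a regular 3D volume. `pose` is a 4x4
# homogeneous transform mapping slice pixel coordinates (u, v) to volume
# voxel coordinates; a real system would also resample and weight samples.
import numpy as np

def insert_slice(volume, weight, slice_img, pose, pixel_spacing=1.0):
    h, w = slice_img.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous slice coordinates (slice plane is z = 0).
    pts = np.stack([u.ravel() * pixel_spacing,
                    v.ravel() * pixel_spacing,
                    np.zeros(u.size), np.ones(u.size)])
    vox = (pose @ pts)[:3]                        # 3 x N voxel coordinates
    idx = np.round(vox).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    zi, yi, xi = idx[2, ok], idx[1, ok], idx[0, ok]
    # Accumulate samples and weights so overlapping slices blend smoothly.
    np.add.at(weight, (zi, yi, xi), 1.0)
    np.add.at(volume, (zi, yi, xi), slice_img.ravel()[ok])
    return volume, weight

vol = np.zeros((64, 64, 64)); wgt = np.zeros_like(vol)
pose = np.eye(4); pose[:3, 3] = [10, 10, 32]      # translate slice into the volume
insert_slice(vol, wgt, np.random.rand(32, 32), pose)
recon = np.divide(vol, wgt, out=np.zeros_like(vol), where=wgt > 0)
```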

  4. 3D virtual colonoscopy with real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Wan, Ming; Li, Wei J.; Kreeger, Kevin; Bitter, Ingmar; Kaufman, Arie E.; Liang, Zhengrong; Chen, Dongqing; Wax, Mark R.

    2000-04-01

    In our previous work, we developed a virtual colonoscopy system on a high-end 16-processor SGI Challenge with an expensive hardware graphics accelerator. The goal of this work is to port the system to a low-cost PC in order to increase its availability for mass screening. Recently, Mitsubishi Electric has developed a volume-rendering PC board, called VolumePro, which includes 128 MB of RAM and a vg500 rendering chip. The vg500 chip, based on Cube-4 technology, can render a 256³ volume at 30 frames per second. High image quality of volume rendering inside the colon is guaranteed by the full lighting model and 3D interpolation supported by the vg500 chip. However, the VolumePro board lacks some features required by our interactive colon navigation. First, VolumePro currently does not support perspective projection, which is paramount for interior colon navigation. Second, the patient colon data is usually much larger than 256³ and cannot be rendered in real time. In this paper, we present our solutions to these problems, including simulated perspective projection and axis-aligned boxing techniques, and demonstrate the high performance of our virtual colonoscopy system on low-cost PCs.
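
    The "axis-aligned boxing" idea can be illustrated with a hedged sketch: the large colon CT volume is carved into axis-aligned sub-volumes, each small enough for a renderer limited to 256³ grids, centred on samples of a precomputed colon centerline. The box placement and margins below are illustrative assumptions, not the authors' VolumePro implementation.

```python
# A hedged sketch of an "axis-aligned boxing" idea: carve a large colon CT
# volume into axis-aligned sub-volumes, each small enough for a renderer
# limited to 256^3 grids, centred on successive points of a precomputed
# colon centerline. This illustrates the general idea only, not the
# authors' VolumePro implementation.
import numpy as np

MAX_EDGE = 256   # hardware limit on renderable volume edge length

def boxes_along_centerline(volume_shape, centerline, margin=32):
    """Return a list of (lo, hi) index triples, one axis-aligned box per
    centerline sample, clamped to the volume and at most MAX_EDGE wide."""
    shape = np.array(volume_shape)
    half = (MAX_EDGE - 2 * margin) // 2
    boxes = []
    for p in centerline:
        lo = np.clip(np.asarray(p) - half - margin, 0, shape - 1)
        hi = np.minimum(lo + MAX_EDGE, shape)
        lo = np.maximum(hi - MAX_EDGE, 0)        # keep the full edge length if possible
        boxes.append((tuple(lo), tuple(hi)))
    return boxes

# Example: a 400 x 512 x 512 CT volume and three centerline samples.
for lo, hi in boxes_along_centerline((400, 512, 512),
                                     [(50, 200, 200), (200, 256, 300), (350, 400, 100)]):
    print(lo, hi)
```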

  5. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional (4D) image is 3D volume data that varies with time. It is used to express a deforming or moving object, for example in virtual surgery or 4D ultrasound. It is difficult to render a 4D image with conventional ray-casting or shear-warp factorization methods because of their long rendering times and the pre-processing required whenever the volume data change. Even if 3D texture mapping is used, repeated volume loading is still time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by using the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If a brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
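
    The brick-level coherence test can be sketched as follows: a brick is re-uploaded as a 3D texture only when it differs noticeably from the copy already resident in texture memory. The brick size, difference metric, and tolerance below are illustrative assumptions, not the paper's exact test.

```python
# A minimal sketch (assumptions, not the paper's code) of the brick-level
# coherence test: a brick is re-uploaded as a 3D texture only when it differs
# noticeably from the brick already resident in texture memory.
import numpy as np

BRICK = 32  # brick edge length in voxels

def bricks(volume):
    nz, ny, nx = volume.shape
    for z in range(0, nz, BRICK):
        for y in range(0, ny, BRICK):
            for x in range(0, nx, BRICK):
                yield (z, y, x), volume[z:z+BRICK, y:y+BRICK, x:x+BRICK]

def bricks_to_upload(new_volume, cached, tol=2.0):
    """Return the brick origins whose mean absolute difference from the cached
    copy exceeds `tol`; only these need a new glTexSubImage3D-style upload."""
    dirty = []
    for origin, data in bricks(new_volume):
        old = cached.get(origin)
        if old is None or np.mean(np.abs(data.astype(float) - old)) > tol:
            dirty.append(origin)
            cached[origin] = data.astype(float).copy()
    return dirty

cache = {}
frame0 = np.random.randint(0, 255, (64, 64, 64))
frame1 = frame0.copy(); frame1[:32, :32, :32] += 50      # only one region deforms
print(len(bricks_to_upload(frame0, cache)))              # all bricks on the first frame
print(len(bricks_to_upload(frame1, cache)))              # only the changed brick
```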

  6. Points based reconstruction and rendering of 3D shapes from large volume dataset

    NASA Astrophysics Data System (ADS)

    Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming

    2003-05-01

    In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information contained in them. However, the huge amount of data generated by modern medical imaging devices continually challenges real-time processing and rendering algorithms. Spurred by the great achievements of Points Based Rendering (PBR) in the field of computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.

  7. Volume rendering segmented data using 3D textures: a practical approach for intra-operative visualization

    NASA Astrophysics Data System (ADS)

    Subramanian, Navneeth; Mullick, Rakesh; Vaidya, Vivek

    2006-03-01

    Volume rendering has high utility in visualization of segmented datasets. However, volume rendering of the segmented labels along with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at the sharp boundaries. This issue is further amplified in 3D texture based volume rendering due to the inaccessibility of the interpolation stage. We present an approach which helps minimize intermixing artifacts while maintaining the high performance of 3D texture based volume rendering - both of which are critical for intra-operative visualization. Our approach uses a 2D transfer function based classification scheme where label distinction is achieved through an encoding that generates unique gradient values for labels. This helps ensure that labelled voxels always map to distinct regions in the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, our algorithm does not require multiple passes for rendering and supports more than four masks. It also allows for real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities are available with minimal texture memory requirements amongst comparable algorithms. Results are presented on clinical and phantom data.
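
    One way to picture the label-encoding idea is to give every segmentation label a reserved, non-overlapping band along the second transfer-function axis, so that samples from different labels cannot map to the same region of the 2D transfer function even after interpolation. The band width and the toy transfer function below are illustrative assumptions, not the paper's exact scheme.

```python
# A hedged sketch of the label-encoding idea: each segmentation label gets a
# reserved band in the second transfer-function axis so labelled samples land
# in distinct regions of the 2D transfer function. Band width and TF contents
# are illustrative assumptions, not the paper's exact scheme.
import numpy as np

N_LABELS, BAND = 8, 32                      # 8 labels, 32 bins per label band

def encode_labels(labels):
    """Map label ids to the centre of their reserved band (0..255)."""
    return labels * BAND + BAND // 2

def classify(values, encoded, tf_2d):
    """Look up RGBA from a 256 x 256 2D transfer function indexed by
    (intensity, encoded label channel)."""
    return tf_2d[values.astype(int), encoded.astype(int)]

# Build a toy 2D TF: each label band gets its own colour, opacity from intensity.
tf = np.zeros((256, 256, 4))
for lab in range(N_LABELS):
    rgb = np.array([lab & 1, (lab >> 1) & 1, (lab >> 2) & 1], dtype=float)
    tf[:, lab*BAND:(lab+1)*BAND, :3] = rgb
    tf[:, lab*BAND:(lab+1)*BAND, 3] = np.linspace(0, 1, 256)[:, None]

labels = np.array([0, 1, 1, 3])             # per-sample segmentation labels
values = np.array([10, 120, 200, 255])      # per-sample intensities
print(classify(values, encode_labels(labels), tf))
```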

  8. 3D colour visualization of label images using volume rendering techniques.

    PubMed

    Vandenhouten, R; Kottenhoff, R; Grebe, R

    1995-01-01

    Volume rendering methods for the visualization of 3D image data sets have been developed and collected in a C library. The core algorithm consists of a perspective ray casting technique for a natural and realistic view of the 3D scene. New edge operator shading methods are employed for a fast and information preserving representation of surfaces. Control parameters of the algorithm can be tuned to have either smoothed surfaces or a very detailed rendering of the geometrical structure. Different objects can be distinguished by different colours. Shadow ray tracing has been implemented to improve the realistic impression of the 3D image. For a simultaneous representation of objects at different depths that hide each other, two types of transparency mode are used (wireframe and glass transparency). Single objects or groups of objects can be excluded from the rendering (peeling). Three orthogonal cutting planes or one arbitrarily placed cutting plane can be applied to the rendered objects in order to get additional information about inner structures, contours, and relative positions. PMID:8569308

  9. 3D Reconstruction from X-ray Fluoroscopy for Clinical Veterinary Medicine using Differential Volume Rendering

    NASA Astrophysics Data System (ADS)

    Khongsomboon, Khamphong; Hamamoto, Kazuhiko; Kondo, Shozo

    3D reconstruction from ordinary X-ray equipment, rather than CT or MRI, is required in clinical veterinary medicine. The authors have already proposed a 3D reconstruction technique from X-ray photographs to present bone structure. Although the reconstruction is useful for veterinary medicine, the technique has two problems: one concerns X-ray exposure and the other the data acquisition process. X-ray fluoroscopy is ordinary, not specialized, equipment that can solve both problems. Therefore, in this paper, we propose a method for 3D reconstruction from X-ray fluoroscopy for clinical veterinary medicine. Fluoroscopy is usually used to observe the movement of an organ, or to identify the position of an organ for surgery, using a weak X-ray intensity. Since fluoroscopy can output the observed result as a movie, the two previous problems caused by the use of X-ray photographs can be solved. However, a new problem arises from the weak X-ray intensity: although fluoroscopy can present information on not only bone structure but also soft tissues, the contrast is very low and it is very difficult to recognize some soft tissues. It would be very useful in clinical veterinary medicine to be able to observe not only bone structure but also soft tissues clearly with ordinary X-ray equipment. To solve this problem, this paper proposes a new method to determine opacity in the volume rendering process. The opacity is determined according to the 3D differential coefficient of the 3D reconstruction. This differential volume rendering can present a 3D structure image of multiple organs volumetrically and clearly for clinical veterinary medicine. This paper shows results of a simulation and an experimental investigation of a small dog, with evaluation by veterinarians.
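
    The opacity rule described above, driven by the 3D differential coefficient (gradient) of the reconstruction, can be sketched with central differences; the normalisation and clamping are illustrative assumptions.

```python
# A minimal sketch of opacity driven by the 3D differential coefficient
# (gradient magnitude) of the reconstruction, in the spirit of the paper's
# "differential volume rendering"; normalisation and clamping here are
# illustrative assumptions.
import numpy as np

def gradient_opacity(volume, gain=1.0):
    gz, gy, gx = np.gradient(volume.astype(float))     # central differences
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    mag /= (mag.max() + 1e-12)                          # normalise to [0, 1]
    return np.clip(gain * mag, 0.0, 1.0)                # per-voxel opacity

# Boundaries between regions (e.g. organ surfaces) get high opacity,
# homogeneous interiors stay nearly transparent.
vol = np.zeros((32, 32, 32)); vol[8:24, 8:24, 8:24] = 100.0
alpha = gradient_opacity(vol)
print(alpha[8, 16, 16], alpha[16, 16, 16])              # boundary vs interior voxel
```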

  10. Adaptive volume rendering of cardiac 3D ultrasound images: utilizing blood pool statistics

    NASA Astrophysics Data System (ADS)

    Åsen, Jon Petter; Steen, Erik; Kiss, Gabriel; Thorstensen, Anders; Rabben, Stein Inge

    2012-03-01

    In this paper we introduce and investigate an adaptive direct volume rendering (DVR) method for real-time visualization of cardiac 3D ultrasound. DVR is commonly used in cardiac ultrasound to visualize interfaces between tissue and blood. However, this is particularly challenging with ultrasound images due to variability of the signal within tissue as well as variability of noise signal within the blood pool. Standard DVR involves a global mapping of sample values to opacity by an opacity transfer function (OTF). While a global OTF may represent the interface correctly in one part of the image, it may result in tissue dropouts, or even artificial interfaces within the blood pool, in other parts of the image. In order to increase the correctness of the rendered image, the presented method utilizes blood pool statistics to make regional adjustments to the OTF. The regional adaptive OTF was compared with a global OTF in a dataset of apical recordings from 18 subjects. For each recording, three renderings from standard views (apical 4-chamber (A4C), inverted A4C (IA4C) and mitral valve (MV)) were generated for both methods, and each rendering was tuned to the best visual appearance by a physician echocardiographer. For each rendering we measured the mean absolute error (MAE) between the rendering depth buffer and a validated left ventricular segmentation. The difference d in MAE between the global and regional methods was calculated, and t-test results are reported with significant improvements for the regional adaptive method (dA4C = 1.5 +/- 0.3 mm, dIA4C = 2.5 +/- 0.4 mm, dMV = 1.7 +/- 0.2 mm, d.f. = 17, all p < 0.001). This improvement by the regional adaptive method was confirmed through qualitative visual assessment by an experienced physician echocardiographer, who concluded that the regional adaptive method produced rendered images with fewer tissue dropouts and less spurious structures inside the blood pool in the vast majority of the renderings. The algorithm has been
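
    The regional adaptation can be pictured with a hedged sketch: the opacity ramp in each image region starts just above the locally estimated blood-pool noise level (mean plus a few standard deviations). The region layout, the k factor, and the ramp shape are illustrative assumptions, not the authors' algorithm.

```python
# A hedged sketch of regionally adapting an opacity transfer function (OTF) to
# blood-pool statistics: the opacity ramp in each region starts just above the
# locally estimated blood-pool noise level. The region grid, k-factor, and ramp
# shape are illustrative assumptions, not the authors' algorithm.
import numpy as np

def regional_thresholds(volume, blood_mask, grid=(2, 2, 2), k=3.0):
    """Split the volume into a coarse grid of regions and return, per region,
    the opacity onset value mean + k * std of blood-pool samples."""
    thresholds = np.zeros(grid)
    zs, ys, xs = [np.array_split(np.arange(n), g)
                  for n, g in zip(volume.shape, grid)]
    for i, z in enumerate(zs):
        for j, y in enumerate(ys):
            for l, x in enumerate(xs):
                blk = volume[np.ix_(z, y, x)]
                msk = blood_mask[np.ix_(z, y, x)]
                noise = blk[msk] if msk.any() else blk.ravel()
                thresholds[i, j, l] = noise.mean() + k * noise.std()
    return thresholds

def opacity(sample, threshold, width=20.0):
    """Linear ramp from 0 at the regional threshold to 1 over `width` units."""
    return np.clip((sample - threshold) / width, 0.0, 1.0)

vol = np.random.normal(40, 10, (64, 64, 64))            # noisy blood pool
vol[:, :, 32:] += 80                                     # brighter tissue half
blood = vol < 70
print(regional_thresholds(vol, blood))
```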

  11. A PC-based high-quality and interactive virtual endoscopy navigating system using 3D texture based volume rendering.

    PubMed

    Hwang, Jin-Woo; Lee, Jong-Min; Kim, In-Young; Song, In-Ho; Lee, Yong-Hee; Kim, SunI

    2003-05-01

    Virtual endoscopy is an alternative to optical endoscopy, and visual quality and interactivity are crucial for it. One solution is to use the 3D texture map based volume rendering method that offers high rendering speed without reducing visual quality. However, it is difficult to apply the method to virtual endoscopy. First, 3D texture mapping requires a high-end graphics workstation. Second, texture memory limits reduce the frame rate. Third, lack of shading reduces visual quality significantly. As 3D texture mapping has recently become available on personal computers, we developed an interactive navigation system using 3D texture mapping on a personal computer. We divided the volume data into small cubes and tested whether the cubes had meaningful data. Only the cubes that passed the test were loaded into the texture memory and rendered. With the amount of data to be rendered minimized, rendering speed increased remarkably. We also improved visual quality by implementing full Phong shading based on the iso-surface shading method without sacrificing interactivity. With the developed navigation system, 256 x 256 x 256 sized brain MRA data was interactively explored with good image quality. PMID:12725966

  12. Hyoid bone development: An assessment of optimal CT scanner parameters and 3D volume rendering techniques

    PubMed Central

    Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.

    2015-01-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349

  13. Exploring Brushlet Based 3D Textures in Transfer Function Specification for Direct Volume Rendering of Abdominal Organs.

    PubMed

    Alper Selver, M

    2015-02-01

    Intuitive and differentiating domains for transfer function (TF) specification for direct volume rendering are an important research area for producing informative and useful 3D images. One of the emerging branches of this research is texture based transfer functions. Although several studies in two, three, and four dimensional image processing show the importance of using texture information, these studies generally focus on segmentation. However, TFs can also be built effectively using appropriate texture information. To accomplish this, methods should be developed to capture the wide variety of shapes, orientations, and textures of biological tissues and organs. In this study, volumetric data (i.e., the domain of a TF) is enhanced using brushlet expansion, which represents both low and high frequency textured structures at different quadrants in the transform domain. Three methods (i.e., expert based manual, atlas and machine learning based automatic) are proposed for selection of the quadrants. Non-linear manipulation of the complex brushlet coefficients is also used prior to the tiling of selected quadrants and reconstruction of the volume. Applications to abdominal data sets acquired with CT, MR, and PET show that the proposed volume enhancement effectively improves the quality of 3D rendering using well-known TF specification techniques. PMID:26357028

  14. Accuracy and reliability of measurements obtained from computed tomography 3D volume rendered images.

    PubMed

    Stull, Kyra E; Tise, Meredith L; Ali, Zabiullah; Fowler, David R

    2014-05-01

    Forensic pathologists commonly use computed tomography (CT) images to assist in determining the cause and manner of death as well as for mass disaster operations. Even though the design of the CT machine does not inherently produce distortion, most techniques within anthropology rely on metric variables, thus concern exists regarding the accuracy of CT images reflecting an object's true dimensions. Numerous researchers have attempted to validate the use of CT images; however, the comparisons have only been conducted on limited elements and/or comparisons were between measurements taken from a dry element and measurements taken from the 3D-CT image of the same dry element. A full-body CT scan was performed prior to autopsy at the Office of the Chief Medical Examiner for the State of Maryland. Following autopsy, the remains were processed to remove all soft tissues and the skeletal elements were subjected to an additional CT scan. Percent differences and Bland-Altman plots were used to assess the accuracy between osteometric variables obtained from the dry skeletal elements and from CT images with and without soft tissues. An additional seven crania were scanned, measured by three observers, and the reliability was evaluated by technical error of measurement (TEM) and relative technical error of measurement (%TEM). Average percent differences between the measurements obtained from the three data sources ranged from 1.4% to 2.9%. Bland-Altman plots illustrated the two sets of measurements were generally within 2 mm for each comparison between data sources. Intra-observer TEM and %TEM for three observers and all craniometric variables ranged between 0.46 mm and 0.77 mm and 0.56% and 1.06%, respectively. The three-way inter-observer TEM and %TEM for craniometric variables was 2.6 mm and 2.26%, respectively. Variables that yielded high error rates were orbital height, orbital breadth, inter-orbital breadth and parietal chord. Overall, minimal differences were found among the

  15. A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-05-01

    We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable to render a 204 GB data cube with an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
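
    The volume-bricking idea behind such a framework can be sketched simply: the data cube is split into fixed-size bricks and the bricks are distributed across the GPUs of the cluster, so each GPU ray-casts only its local bricks before the partial images are composited. The round-robin assignment below is an illustrative assumption, not the authors' scheduling strategy.

```python
# A minimal sketch (illustrative assumptions only) of volume bricking: a large
# cube is split into fixed-size bricks, and bricks are distributed round-robin
# across the GPUs of a cluster so that each GPU ray-casts only its local bricks
# before the partial images are composited.
import numpy as np
from itertools import product

def distribute_bricks(cube_shape, brick=256, n_gpus=128):
    counts = [int(np.ceil(n / brick)) for n in cube_shape]
    assignment = {}                                   # brick origin -> GPU id
    for rank, (i, j, k) in enumerate(product(*[range(c) for c in counts])):
        assignment[(i * brick, j * brick, k * brick)] = rank % n_gpus
    return assignment

# Example: a 2048 x 2048 x 12288 spectral cube (~200 GB at 4 bytes/voxel).
assign = distribute_bricks((2048, 2048, 12288))
print(len(assign), "bricks over", len(set(assign.values())), "GPUs")
```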

  16. The physics of volume rendering

    NASA Astrophysics Data System (ADS)

    Peters, Thomas

    2014-11-01

    Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
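
    The connection can be made concrete with the standard emission-absorption form of the transfer equation and the discrete front-to-back compositing it reduces to along a viewing ray (textbook relations, not RADMC-3D implementation specifics):

```latex
% Emission--absorption radiative transfer along a ray of length D:
\[
  \frac{dI(s)}{ds} = j(s) - \kappa(s)\, I(s)
  \quad\Longrightarrow\quad
  I(D) = I_0\, e^{-\tau(0, D)} + \int_0^D j(s)\, e^{-\tau(s, D)}\, ds ,
  \qquad \tau(s_1, s_2) = \int_{s_1}^{s_2} \kappa(t)\, dt .
\]
% Discretising the ray into samples $i = 1,\dots,N$ with per-sample colour $c_i$
% and opacity $\alpha_i = 1 - e^{-\kappa_i \Delta s}$ yields the familiar
% front-to-back compositing recurrence used by volume renderers:
\[
  C \leftarrow C + (1 - A)\, \alpha_i\, c_i , \qquad
  A \leftarrow A + (1 - A)\, \alpha_i .
\]
```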

  17. Direct Volume Rendering of Curvilinear Volumes

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Wilhelms, J.; Challinger, J.; Alper, N.; Ramamoorthy, S.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Direct volume rendering can visualize sampled 3D scalar data as a continuous medium, or extract features. However, it is generally slow. Furthermore, most algorithms for direct volume rendering have assumed rectilinear gridded data. This paper discusses methods for using direct volume rendering when the original volume is curvilinear, i.e. is divided into six-sided cells which are not necessarily equilateral hexahedra. One approach is to ray-cast such volumes directly. An alternative approach is to interpolate the sample volumes to a rectilinear grid, and use this regular volume for rendering. Advantages and disadvantages of the two approaches in terms of speed and image quality are explored.

  18. Time-Critical Volume Rendering

    NASA Technical Reports Server (NTRS)

    Kaufman, Arie

    1998-01-01

    For the past twelve months, we have conducted and completed a joint research project entitled "Time-Critical Volume Rendering" with NASA Ames. As expected, high-performance volume rendering algorithms have been developed by exploring new, faster rendering techniques, including object presence acceleration, parallel processing, and hierarchical level-of-detail representation. Using our new techniques, initial experiments have achieved real-time rendering rates of more than 10 frames per second on various 3D data sets at the highest resolution. A couple of joint papers and technical reports as well as an interactive real-time demo have been compiled as the result of this project.

  19. Age Estimation in Living Adults using 3D Volume Rendered CT Images of the Sternal Plastron and Lower Chest.

    PubMed

    Oldrini, Guillaume; Harter, Valentin; Witte, Yannick; Martrille, Laurent; Blum, Alain

    2016-01-01

    Age estimation is commonly of interest in a judicial context. In adults, it is less documented than in children. The aim of this study was to evaluate age estimation in adults using CT images of the sternal plastron with the volume rendering technique (VRT). The evaluation criteria are derived from known methods used for age estimation and are applicable in living or dead subjects. The VRT images of 456 patients were analyzed. Two radiologists performed age estimation independently from an anterior view of the plastron. Interobserver agreement and correlation coefficients between each reader's classification and real age were calculated. The interobserver agreement was 0.86, and the correlation coefficients between the readers' classifications and real age classes were 0.60 and 0.65. Spearman correlation coefficients were, respectively, 0.89, 0.67, and 0.71. Analysis of the plastron using VRT allows quick in vivo age estimation, with results similar to those of methods such as Iscan, Suchey-Brooks, and radiographs used to estimate age at death. PMID:27092960

  20. Multivariate volume rendering

    SciTech Connect

    Crawfis, R.A.

    1996-03-01

    This paper presents a new technique for representing multivalued data sets defined on an integer lattice. It extends the state of the art in volume rendering to include nonhomogeneous volume representations, that is, volume rendering of materials with very fine detail (e.g., translucent granite) within a voxel. Multivariate volume rendering is achieved by introducing controlled amounts of noise within the volume representation. Varying the local amount of noise within the volume is used to represent a separate scalar variable. The technique can also be used in image synthesis to create more realistic clouds and fog.
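
    The core idea can be sketched as follows: a second scalar field is displayed by locally modulating the amplitude of a noise field mixed into the rendered volume, so grainier regions indicate higher values of the second variable. The mixing rule below is an illustrative assumption, not the paper's exact formulation.

```python
# A hedged sketch of noise-modulated multivariate rendering: a second scalar
# field controls the local amplitude of a noise field mixed into the rendered
# volume. The mixing rule is an illustrative assumption, not the paper's
# exact formulation.
import numpy as np

rng = np.random.default_rng(0)

def noise_modulated(primary, secondary, max_noise=0.3):
    """Perturb the primary scalar with noise whose local amplitude is
    proportional to the (normalised) secondary scalar."""
    sec = (secondary - secondary.min()) / (np.ptp(secondary) + 1e-12)
    noise = rng.standard_normal(primary.shape)
    return primary + max_noise * sec * noise * np.ptp(primary)

temp = np.linspace(0, 1, 64**3).reshape(64, 64, 64)     # primary variable
humidity = rng.random((64, 64, 64))                     # secondary variable
render_field = noise_modulated(temp, humidity)          # feed to a normal DVR pass
```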

  1. Fast voxel-based 2D/3D registration algorithm using a volume rendering method based on the shear-warp factorization

    NASA Astrophysics Data System (ADS)

    Weese, Juergen; Goecke, Roland; Penney, Graeme P.; Desmedt, Paul; Buzug, Thorsten M.; Schumann, Heidrun

    1999-05-01

    2D/3D registration makes it possible to use pre-operative CT scans for navigation purposes during X-ray fluoroscopy guided interventions. We present a fast voxel-based method for this registration task, which uses a recently introduced similarity measure (pattern intensity). This measure is especially suitable for 2D/3D registration, because it is robust with respect to structures such as a stent visible in the X-ray fluoroscopy image but not in the CT scan. The method uses only a part of the CT scan for the generation of digitally reconstructed radiographs (DRRs) to accelerate their computation. Nevertheless, computation time is crucial for intra-operative application and a further speed-up is required, because numerous DRRs must be computed. For that reason, the suitability of different volume rendering methods for 2D/3D registration has been investigated. A method based on the shear-warp factorization of the viewing transformation turned out to be especially suitable and forms the basis of the registration algorithm. The algorithm has been applied to images of a spine phantom and to clinical images. For comparison, registration results have been calculated using ray-casting. The shear-warp factorization based rendering method accelerates registration by a factor of up to seven compared to ray-casting without degrading registration accuracy. Using a vertebra as the feature for registration, computation time is in the range of 3-4 s (Sun UltraSparc, 300 MHz), which is acceptable for intra-operative application.
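
    A pattern-intensity style similarity measure can be sketched on the difference between the fluoroscopy image and a DRR: the measure rewards difference images that are locally flat, so added structures such as a stent perturb it only locally. The exact formulation and the parameter values below are given for illustration only and should be checked against the original publication.

```python
# A hedged sketch of a pattern-intensity style similarity measure computed on
# the difference between the fluoroscopy image and a DRR. The formulation
# below (sum over neighbourhood differences of sigma^2 / (sigma^2 + d^2)) and
# the parameter values are illustrative and should be verified against the
# original publication before use.
import numpy as np

def pattern_intensity(fluoro, drr, scale=1.0, radius=3, sigma=10.0):
    diff = fluoro.astype(float) - scale * drr.astype(float)
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == dx == 0 or dy * dy + dx * dx > radius * radius:
                continue
            shifted = np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
            d = diff - shifted
            total += np.sum(sigma**2 / (sigma**2 + d**2))
    return total            # larger = flatter, more "structure-free" difference image

# Inside a registration loop one would maximise this measure over the rigid
# 3D pose used to generate the DRR (e.g. with a downhill simplex optimiser).
a = np.random.rand(128, 128) * 100
print(pattern_intensity(a, a) > pattern_intensity(a, np.roll(a, 5, axis=0)))
```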

  2. Fast perspective volume ray casting method using GPU-based acceleration techniques for translucency rendering in 3D endoluminal CT colonography.

    PubMed

    Lee, Taek-Hee; Lee, Jeongjin; Lee, Ho; Kye, Heewon; Shin, Yeong Gil; Kim, Soo Hong

    2009-08-01

    Recent advances in graphics processing units (GPUs) have enabled direct volume rendering at interactive rates. However, although perspective volume rendering of an opaque isosurface is rapidly performed using the conventional GPU-based method, perspective volume rendering of a non-opaque volume, such as translucency rendering, is still slow. In this paper, we propose an efficient GPU-based acceleration technique of fast perspective volume ray casting for translucency rendering in computed tomography (CT) colonography. The empty space searching step is separated from the shading and compositing steps, and they are divided into separate processing passes in the GPU. Using this multi-pass acceleration, empty space leaping is performed exactly at the voxel level rather than at the block level, so that the efficiency of empty space leaping is maximized for the colon data set, which has many curved or narrow regions. In addition, the numbers of shading and compositing steps are fixed, and additional empty space leapings between colon walls are performed to further increase computational efficiency near the haustral folds. Experiments were performed to illustrate the efficiency of the proposed scheme compared with the conventional GPU-based method, which has been known to be the fastest algorithm. The experimental results showed that the rendering speed of our method was 7.72 fps for translucency rendering of a 1024x1024 colonoscopy image, which was about 3.54 times faster than that of the conventional method. Since our method performs fully optimized empty space leaping for any kind of colon inner shape, the frame-rate variations of our method were about two times smaller than those of the conventional method, guaranteeing smooth navigation. The proposed method could be successfully applied to help diagnose colon cancer using translucency rendering in virtual colonoscopy. PMID:19541296

  3. Volume Rendering of Heliospheric Data

    NASA Astrophysics Data System (ADS)

    Hick, P. P.; Jackson, B. V.; Bailey, M. J.; Buffington, A.

    2001-12-01

    We demonstrate some of the techniques we currently use for the visualization of heliospheric volume data. Our 3D volume data usually are derived from tomographic reconstructions of the solar wind density and velocity from remote sensing observations (e.g., Thomson scattering and interplanetary scintillation observations). We show examples of hardware-based volume rendering using the Volume Pro PCI board (from TeraRecon, Inc.). This board updates the display at a rate of up to 30 frames per second using a parallel projection algorithm, allowing the manipulation of volume data in real-time. In addition, the manipulation of 4D volume data (the 4th dimension usually representing time) enables the visualization in real-time of an evolving (time-dependent) data set. We also show examples of perspective projections using IDL. This work was supported through NASA grant NAG5-9423.

  4. The rendering context for stereoscopic 3D web

    NASA Astrophysics Data System (ADS)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are working to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand-new experience of human-computer interaction. In this paper, we propose a novel approach to apply stereoscopic technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, one for the left-eye view and one for the right-eye view, both to be combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.

  5. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).

  6. Vector quantization for volume rendering

    NASA Technical Reports Server (NTRS)

    Ning, Paul; Hesselink, Lambertus

    1992-01-01

    Volume rendering techniques typically process volumetric data in raw, uncompressed form. As algorithmic and architectural advances improve rendering speeds, however, larger data sets will be evaluated requiring consideration of data storage and transmission issues. In this paper, we analyze the data compression requirements for volume rendering applications and present a solution based on vector quantization. The proposed system compresses volumetric data and then renders images directly from the new data format. Tests on a fluid flow data set demonstrate that good image quality may be achieved at a compression ratio of 17:1 with only a 5 percent cost in additional rendering time.
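
    Vector quantization of volume data can be sketched as follows: the volume is cut into small blocks, each block becomes a vector, and a learned codebook replaces every block by an index from which an approximate volume is decoded for rendering. The block size, codebook size, and the tiny k-means trainer below are illustrative assumptions, not the paper's system.

```python
# A minimal sketch (not the paper's system) of vector quantisation for volume
# data: 2x2x2 blocks become 8-vectors, a small codebook is learned with a few
# Lloyd (k-means) iterations, and each block is replaced by a codebook index.
import numpy as np

def to_blocks(vol, b=2):
    z, y, x = (s // b for s in vol.shape)
    v = vol[:z*b, :y*b, :x*b].reshape(z, b, y, b, x, b)
    return v.transpose(0, 2, 4, 1, 3, 5).reshape(-1, b**3), (z, y, x)

def train_codebook(vectors, k=64, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    code = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((vectors[:, None, :] - code[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        for c in range(k):
            if np.any(idx == c):
                code[c] = vectors[idx == c].mean(0)
    return code, idx

vol = np.random.rand(32, 32, 32).astype(np.float32)
vecs, _ = to_blocks(vol)
codebook, indices = train_codebook(vecs)
# Compressed form: one small index per 8 voxels plus the codebook itself.
approx_blocks = codebook[indices]                  # decode for rendering
print(vecs.shape, codebook.shape, float(np.abs(approx_blocks - vecs).mean()))
```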

  7. Speaking Volumes About 3-D

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  8. 3D rendering of SAR distributions from Thermotron RF-8 using a ray casting technique.

    PubMed

    Paliwal, B R; Gehring, M A; Sanders, C; Mackie, T R; Raffety, H M; Song, C W

    1991-01-01

    A comprehensive 3D visualization package developed for CT-based 3D radiation treatment planning has been modified to volume-render SAR data. The program accepts data from sequential thermographic thermometry measurements as well as calculated data from thermal models. In this presentation, sample data obtained from a capacitive heating system, the 'Thermotron-RF8', are presented. This capability allows the generation of accurate standardized volumetric images of SAR and provides a valuable tool to better preplan hyperthermia treatments. PMID:1919152

  9. Real-time rendering method and performance evaluation of composable 3D lenses for interactive VR.

    PubMed

    Borst, Christoph W; Tiesel, Jan-Phillip; Best, Christopher M

    2010-01-01

    We present and evaluate a new approach for real-time rendering of composable 3D lenses for polygonal scenes. Such lenses, usually called "volumetric lenses," are an extension of 2D Magic Lenses to 3D volumes in which effects are applied to scene elements. Although the composition of 2D lenses is well known, 3D composition was long considered infeasible due to both geometric and semantic complexity. Nonetheless, for a scene with multiple interactive 3D lenses, the problem of intersecting lenses must be considered. Intersecting 3D lenses in meaningful ways supports new interfaces such as hierarchical 3D windows, 3D lenses for managing and composing visualization options, or interactive shader development by direct manipulation of lenses providing component effects. Our 3D volumetric lens approach differs from other approaches and is one of the first to address efficient composition of multiple lenses. It is well-suited to head-tracked VR environments because it requires no view-dependent generation of major data structures, allowing caching and reuse of full or partial results. A Composite Shader Factory module composes shader programs for rendering composite visual styles and geometry of intersection regions. Geometry is handled by Boolean combinations of region tests in fragment shaders, which allows both convex and nonconvex CSG volumes for lens shape. Efficiency is further addressed by a Region Analyzer module and by broad-phase culling. Finally, we consider the handling of order effects for composed 3D lenses. PMID:20224135

  10. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.

  11. Novel Application of Confocal Laser Scanning Microscopy and 3D Volume Rendering toward Improving the Resolution of the Fossil Record of Charcoal

    PubMed Central

    Belcher, Claire M.; Punyasena, Surangi W.; Sivaguru, Mayandi

    2013-01-01

    Variations in the abundance of fossil charcoals between rocks and sediments are assumed to reflect changes in fire activity in Earth’s past. These variations in fire activity are often considered to be in response to environmental, ecological or climatic changes. The role that fire plays in feedbacks to such changes is becoming increasingly important to understand and highlights the need to create robust estimates of variations in fossil charcoal abundance. The majority of charcoal based fire reconstructions quantify the abundance of charcoal particles and do not consider the changes in the morphology of the individual particles that may have occurred due to fragmentation as part of their transport history. We have developed a novel application of confocal laser scanning microscopy coupled to image processing that enables the 3-dimensional reconstruction of individual charcoal particles. This method is able to measure the volume of both microfossil and mesofossil charcoal particles and allows the abundance of charcoal in a sample to be expressed as total volume of charcoal. The method further measures particle surface area and shape allowing both relationships between different size and shape metrics to be analysed and full consideration of variations in particle size and size sorting between different samples to be studied. We believe application of this new imaging approach could allow significant improvement in our ability to estimate variations in past fire activity using fossil charcoals. PMID:23977267

  12. Image space adaptive volume rendering

    NASA Astrophysics Data System (ADS)

    Corcoran, Andrew; Dingliana, John

    2012-01-01

    We present a technique for interactive direct volume rendering which provides adaptive sampling at a reduced memory requirement compared to traditional methods. Our technique exploits frame to frame coherence to quickly generate a two-dimensional importance map of the volume which guides sampling rate optimisation and allows us to provide interactive frame rates for user navigation and transfer function changes. In addition our ray casting shader detects any inconsistencies in our two-dimensional map and corrects them on the fly to ensure correct classification of important areas of the volume.

  13. Anatomical annotation on vascular structure in volume rendered images.

    PubMed

    Jiang, Zhengang; Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Kajita, Yasukazu; Wakabayashi, Toshihiko; Mori, Kensaku

    2013-03-01

    The precise annotation of vascular structure is desired in computer-assisted systems to help surgeons identify each vessel branch. This paper proposes a method that annotates vessels on volume rendered images by rendering their names on them using a two-pass rendering process. In the first rendering pass, vessel surface models are generated using such properties as centerlines, radii, and running directions. Then the vessel names are drawn on the vessel surfaces. Finally, the vessel name images and the corresponding depth buffer are generated by a virtual camera at the viewpoint. In the second rendering pass, volume rendered images are generated by a ray casting volume rendering algorithm that considers the depth buffer generated in the first rendering pass. After the two-pass rendering is finished, an annotated image is generated by blending the volume rendered image with the surface rendered image. To confirm the effectiveness of our proposed method, we implemented a computer-assisted system for the automated annotation of abdominal arteries. The experimental results show that vessel names can be drawn on the corresponding vessel surfaces in the volume rendered images at a computing cost that is nearly the same as that of volume rendering alone. The proposed method has enormous potential to be adopted for annotating vessels in 3D medical images in clinical applications, such as image-guided surgery. PMID:23562139

  14. Wire bonded 3D coils render air core microtransformers competitive

    NASA Astrophysics Data System (ADS)

    Moazenzadeh, A.; Spengler, N.; Lausecker, R.; Rezvani, A.; Mayer, M.; Korvink, J. G.; Wallrabe, U.

    2013-11-01

    We present a novel wafer-level fabrication method for 3D solenoidal microtransformers using an automatic wire bonder for chip-scale, very high frequency regime applications. Using standard microelectromechanical systems fabrication processes for the manufacturing of supporting structures, together with ultra-fast wire bonding for the fabrication of solenoids, enables the flexible and repeatable fabrication, at high throughput, of high performance air core microtransformers. The primary and secondary solenoids are wound one on top of the other in the lateral direction, using a 25 µm thick insulated wire. Besides commonly available gold wire, we also introduce insulated copper wire to our coil winding process. The influence of copper on the transformer properties is explored and compared to gold. A simulation model based on the solenoids’ wire bonding trajectories has been defined using the FastHenry software to accurately predict and optimize the transformer's inductive properties. The transformer chips are encapsulated in polydimethylsiloxane in order to protect the coils from environmental influences and mechanical damage. Meanwhile, the effect of the increase in the internal capacitance of the chips as a result of the encapsulation is analyzed. A fabricated transformer with 20 windings in both the primary and the secondary coils, and a footprint of 1 mm², yields an inductance of 490 nH, a maximum efficiency of 68%, and a coupling factor of 94%. The repeatability of the coil winding process was investigated by comparing the data of 25 identically processed devices. Finally, the microtransformers are benchmarked to underline the potential of the technology in rendering air core transformers competitive.

  15. Sorting and hardware assisted rendering for volume visualization

    SciTech Connect

    Stein, C.; Becker, B.; Max, N.

    1994-03-01

    We present some techniques for volume rendering unstructured data. Interpolation between vertex colors and opacities is performed using hardware assisted texture mapping, and color is integrated for use with a volume rendering system. We also present an O(n²) method for sorting n arbitrarily shaped convex polyhedra prior to visualization. It generalizes the Newell, Newell and Sancha sort for polygons to 3-D volume elements.

  16. Efficient volume rendering using octree space subdivision

    NASA Astrophysics Data System (ADS)

    Krumhauer, Peter; Tsygankov, Michael; Reich, Christian; Evgrafov, Anton

    1999-03-01

    This paper describes a discrete ray-tracing algorithm, which employs the adaptive hierarchical spatial subdivision (octree) technique for 3D uniform binary voxel space representation. The binary voxel space contains voxels of two kinds: 'surface' and 'non-surface.' Surface voxels include property information like the surface normal and color. The usage of octrees dramatically reduces the amount of memory required to store 3D models. The average compression ratio is in the range of 1:24 up to 1:50 compared to uncompressed voxels. A fast ray casting algorithm called BOXER was developed, which allows rendering 256 x 256 x 256 and 512 x 512 x 512 volumes nearly in real time on standard Intel-based PCs.
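
    The octree representation of the binary voxel space can be sketched as follows: homogeneous cubes collapse into single leaves, so large empty regions cost almost no memory and can be skipped wholesale during ray casting. The sketch assumes a cubic, power-of-two volume and is illustrative only, not the BOXER implementation.

```python
# A minimal sketch (illustrative, not the BOXER implementation) of building an
# octree over a binary voxel space: homogeneous cubes collapse into single
# leaves, so large empty regions cost almost no memory and can be skipped
# wholesale during ray casting. Assumes a cubic, power-of-two volume.
import numpy as np

def build_octree(vox, origin=(0, 0, 0)):
    """Return a nested dict; leaves store whether the cube contains surface."""
    if vox.all() or not vox.any() or vox.shape[0] == 1:
        return {"origin": origin, "size": vox.shape[0], "surface": bool(vox.any())}
    h = vox.shape[0] // 2
    children = []
    for dz in (0, h):
        for dy in (0, h):
            for dx in (0, h):
                sub = vox[dz:dz+h, dy:dy+h, dx:dx+h]
                o = (origin[0]+dz, origin[1]+dy, origin[2]+dx)
                children.append(build_octree(sub, o))
    return {"origin": origin, "size": vox.shape[0], "children": children}

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.get("children", []))

# A mostly-empty 64^3 binary volume with one small "surface" cluster.
vox = np.zeros((64, 64, 64), dtype=bool)
vox[20:24, 20:24, 20:24] = True
tree = build_octree(vox)
print("octree nodes:", count_nodes(tree), "vs voxels:", vox.size)
```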

  17. Volume Rendering of AMR Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Pomarède, D.; Chapon, D.; Teyssier, R.; Bournaud, F.; Renaud, F.; Grandjouan, N.

    2013-04-01

    High-resolution simulations often rely on the Adaptive Mesh Refinement (AMR) technique to optimize memory consumption versus attainable precision. While this technique allows for dramatic improvements in terms of computing performance, the analysis and visualization of its data outputs remain challenging. The lack of effective volume renderers for the octree-based AMR used by the RAMSES simulation program has led to the development of the solutions presented in this paper. Two custom algorithms are discussed, based on the splatting and the ray-casting techniques. Their usage is illustrated in the context of the visualization of a high-resolution, 6000-processor simulation of a Milky Way-like galaxy. The performance obtained in terms of memory management and parallelism speedup is presented.

  18. Efficient hardware accelerated rendering of multiple volumes by data dependent local render functions

    NASA Astrophysics Data System (ADS)

    Lehmann, Helko; Geller, Dieter; Weese, Jürgen; Kiefer, Gundolf

    2007-03-01

    The inspection of a patient's data for diagnostics, therapy planning or therapy guidance involves an increasing number of 3D data sets, e.g. acquired by different imaging modalities, with different scanner settings or at different times. To enable viewing of the data in one consistent anatomical context, fused interactive renderings of multiple 3D data sets are desirable. However, interactive fused rendering of typical medical data sets using standard computing hardware remains a challenge. In this paper we present a method to render multiple 3D data sets. By introducing local rendering functions, i.e. functions that are adapted to the complexity of the visible data contained in the different regions of a scene, we can ensure that the overall performance for fused rendering of multiple data sets depends on the actual amount of visible data. This is in contrast to other approaches where the performance depends mainly on the number of rendered data sets. We integrate the method into a streaming rendering architecture with brick-based data representations of the volume data. This enables efficient handling of data sets that do not fit into the graphics board memory and a good utilization of the texture caches. Furthermore, transfer and rendering of volume data that does not contribute to the final image can be avoided. We illustrate the benefits of our method by experiments with clinical data.

  19. Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    PubMed Central

    Kroes, Thomas; Post, Frits H.; Botha, Charl P.

    2012-01-01

    The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, especially the more realistic lighting has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques also to direct volume rendering (DVR). With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made available our framework under a permissive open source license. PMID:22768292
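
    One standard building block for Monte Carlo rendering of heterogeneous volumes is Woodcock (delta) tracking, which samples free path lengths against a majorant extinction and probabilistically rejects "null" collisions. The sketch below illustrates that general technique only; it is not Exposure Render's implementation.

```python
# A hedged sketch of Woodcock (delta) tracking, a standard way to sample free
# path lengths through a heterogeneous volume in Monte Carlo volume rendering;
# this illustrates the general technique, not Exposure Render's implementation.
import numpy as np

rng = np.random.default_rng(1)

def woodcock_track(sigma_t, origin, direction, sigma_max):
    """Return the position of the next real interaction along the ray,
    or None if the ray leaves the volume first."""
    direction = direction / np.linalg.norm(direction)
    pos = origin.astype(float).copy()
    while True:
        # Sample a tentative free path against the majorant extinction.
        pos = pos + direction * (-np.log(1.0 - rng.random()) / sigma_max)
        if np.any(pos < 0) or np.any(pos >= np.array(sigma_t.shape)):
            return None                              # left the volume
        idx = tuple(pos.astype(int))
        if rng.random() < sigma_t[idx] / sigma_max:  # real (not null) collision
            return pos

# Toy extinction field: a dense block inside an almost transparent volume.
sigma = np.full((64, 64, 64), 0.01); sigma[24:40, 24:40, 24:40] = 0.5
hit = woodcock_track(sigma, np.array([0.0, 32.0, 32.0]), np.array([1.0, 0.0, 0.0]),
                     sigma_max=sigma.max())
print(hit)
```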

  20. Direct volume rendering methods for cell structures.

    PubMed

    Martišek, Dalibor; Martišek, Karel

    2012-01-01

    The study of the complicated architecture of cell space structures is an important problem in biology and medical research. Optical cuts of cells produced by confocal microscopes enable two-dimensional (2D) and three-dimensional (3D) reconstructions of observed cells. This paper discusses new possibilities for direct volume rendering of these data. We often encounter 16-bit or deeper images in confocal microscopy of cells. Most of the information contained in these images is imperceptible to human vision; therefore, it is necessary to use mathematical algorithms for the visualization of such images. Existing software tools such as OpenGL or DirectX run quickly on graphics workstations with special graphics cards, but they run very unsatisfactorily on PCs without these cards, and their outputs are usually poor for real data. These tools are black boxes to a common user, which makes it impossible to correct and improve them. With the proposed method, more parameters of the environment can be set, making it possible to apply 3D filters to set the output image sharpness in relation to the noise. The quality of the output is incomparably better than that of the earlier described methods and is worth the increased computing time. We would like to offer mathematical methods of 3D scalar data visualization, describing new algorithms that run very well on standard PCs. PMID:22511504

  1. Volume Visual Attention Maps (VVAM) in ray-casting rendering.

    PubMed

    Beristain, Andoni; Congote, John; Ruiz, Oscar

    2012-01-01

    This paper presents an extension of visual attention maps to volume data visualization, where eye fixation points become rays in the 3D space and the visual attention map becomes a volume. This Volume Visual Attention Map (VVAM) is used to interactively enhance a ray-casting based direct volume rendering (DVR) visualization. The practical application of this idea to the biomedical image visualization field is explored for interactive visualization. PMID:22356956

  2. A fast high accuracy volume renderer for unstructured data.

    SciTech Connect

    Angel, Edward S.; Moreland, Kenneth D.

    2004-07-01

    In this paper, we describe an unstructured mesh volume renderer. Our renderer is interactive and accurately integrates light intensity an order of magnitude faster than previous methods. We employ a projective technique that takes advantage of the expanded programmability of the latest 3D graphics hardware. We also analyze an optical model commonly used for scientific volume rendering and derive a new method to compute it that is very accurate but computationally feasible in real time. We demonstrate a system that can accurately produce a volume rendering of an unstructured mesh with a first-order approximation to any classification method. Furthermore, our system is capable of rendering over 300 thousand tetrahedra per second yet is independent of the classification scheme used.

  3. Automatic bone-free rendering of cerebral aneurysms via 3D CTA

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Abrahams, John M.; Udupa, Jayaram K.

    2001-07-01

    3D computed tomographic angiography (3D-CTA) has been described as an alternative to digital subtraction angiography (DSA) in the clinical evaluation of cerebrovascular diseases. A bone-free rendition of 3D-CTA facilitates a quick and accurate clinical evaluation of the disease. We propose a new bone removal process that is accomplished in three sequential steps - (1) primary delineation and removal of bones, (2) removing the effect of partial voluming around bone surfaces, and (3) removal of thin bones around the nose, mouth and eyes. The bone-removed image of vasculature and aneurysms is rendered via maximum intensity projection (MIP). The method has been tested on 10 patients' 3D-CTA images acquired on a General Electric Hi-Speed Spiral CT scanner. The algorithm successfully subtracted bone showing the cerebral vasculature in all 10 patients' data. The method allows for a unique analysis of 3D-CTA data for near-automatic removal of bones. This greatly reduces the need for the manual removal of bones that is currently utilized and greatly facilitates the visualization of the anatomy of vascular lesions.
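
    The final rendering step can be illustrated with a minimal maximum intensity projection (MIP) of the bone-removed volume: masked (bone) voxels are ignored and the brightest remaining value along each ray is kept. The axis-aligned projection and the mask threshold below are illustrative simplifications.

```python
# A minimal sketch of a maximum intensity projection (MIP) of a bone-removed
# CTA volume; real systems project along arbitrary view directions, but an
# axis-aligned MIP already conveys the idea. The mask threshold is illustrative.
import numpy as np

def mip(volume, mask=None, axis=0):
    """Project the volume by taking the maximum value along one axis,
    ignoring voxels excluded by the mask (e.g. previously removed bone)."""
    vol = volume.astype(float)
    if mask is not None:
        vol = np.where(mask, vol, vol.min())
    return vol.max(axis=axis)

# Toy CTA: bright "bone" plate plus a fainter "vessel", bone masked out.
cta = np.zeros((64, 128, 128)); cta[:, :, :10] = 1000            # bone
cta[:, 60:68, 20:120] = 300                                      # vessel
bone_free = cta < 800                                            # keep non-bone voxels
image = mip(cta, mask=bone_free, axis=0)
print(image.max())                                               # vessel, not bone, dominates
```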

  4. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.

  5. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475

  6. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  7. Efficient space-leaping method for volume rendering

    NASA Astrophysics Data System (ADS)

    Cho, Sungup; Kim, Hyeongdo; Kim, Myeongsun; Jeong, Changsung

    1999-03-01

    Volume rendering is a technique that produces a 2D image of an object on the image screen from 3D volume data. Ray casting, one of the most popular volume rendering techniques, generates detailed, high-quality images compared with other volume rendering algorithms, but because it is highly time consuming for large numbers of voxels, many acceleration techniques have been developed. Here we introduce a new acceleration technique, an efficient space-leaping method. Our method traverses the volume data and projects the 3D locations of voxels onto the image screen to find the pixels that have non-zero values in the final image and the non-empty voxel locations that are closest along each ray. During this process, adaptive run-length encoding and a line-drawing algorithm are used to traverse the volume data and find the non-zero pixels efficiently. Rays are then cast not through every screen pixel but only through the projected pixels, and the rendering process starts directly from the first non-empty voxel location. The new method yields significant time savings when applied to surface extraction, without loss of image quality.
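
    A rough sketch of the space-leaping idea under a strong simplification: an orthographic view along the z axis, so projecting a voxel onto the screen amounts to dropping its z coordinate. Per-pixel first-hit depths are recorded and rays start there; the run-length encoding and line-drawing steps of the paper are omitted.

```python
# Space leaping for an axis-aligned orthographic MIP: skip empty space by
# starting each ray at the first non-empty voxel recorded for its pixel.
import numpy as np

def first_hit_depth(volume, empty_value=0):
    """Return per-pixel depth of the first non-empty voxel (or -1 if none)."""
    nonempty = volume != empty_value          # (z, y, x) boolean mask
    any_hit = nonempty.any(axis=0)
    return np.where(any_hit, nonempty.argmax(axis=0), -1)

def leaping_mip(volume, empty_value=0):
    depth = first_hit_depth(volume, empty_value)
    image = np.zeros(volume.shape[1:], dtype=volume.dtype)
    ys, xs = np.nonzero(depth >= 0)           # cast rays only where data exists
    for y, x in zip(ys, xs):
        image[y, x] = volume[depth[y, x]:, y, x].max()  # skip empty space
    return image

vol = np.zeros((32, 64, 64), dtype=np.uint8)
vol[10:20, 20:40, 20:40] = 200
print(leaping_mip(vol).max())
```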

  8. Fast volume rendering for medical image.

    PubMed

    Ying, Hu; Xin-He, Xu

    2005-01-01

    In order to improve the rendering speed of ray casting and make this technique practical for routine medical applications, two new and improved techniques are described in this paper. First, an integrated method using the "proximity clouds" technique is applied to speed up ray casting. Second, the 3D rendering is further accelerated through a parallel implementation based on a "single computer, multiple CPU" model. Four groups of CT data sets were used to validate the improvement in rendering speed. The results show an interactive rendering speed of 6-10 fps, which is almost real time and makes our algorithm practical for routine medical visualization. PMID:17281409

  9. Foundations for Measuring Volume Rendering Quality

    NASA Technical Reports Server (NTRS)

    Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters that need to be specified to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
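
    In the spirit of the difference-classification methodology described above, the following sketch computes a few simple metrics between a test rendering and a benchmark image; the metric names and the use of RMSE are our own choices rather than the paper's suite of functions.

```python
# Compare a test rendering against a benchmark image with simple metrics.
import numpy as np

def compare_renderings(benchmark, test):
    a = benchmark.astype(np.float64)
    b = test.astype(np.float64)
    diff = np.abs(a - b)
    return {
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        "max_abs": float(diff.max()),
        "pct_pixels_changed": float(np.mean(diff > 0) * 100.0),
    }

ref = np.random.rand(256, 256)
print(compare_renderings(ref, ref + 0.01 * np.random.rand(256, 256)))
```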

  10. Mobile Volume Rendering: Past, Present and Future.

    PubMed

    Noguera, José M; Jiménez, J Roberto

    2016-02-01

    Volume rendering has been a relevant topic in scientific visualization for the last decades. However, the exploration of reasonably big volume datasets requires considerable computing power, which has limited this field to the desktop scenario. But the recent advances in mobile graphics hardware have motivated the research community to overcome these restrictions and to bring volume graphics to these ubiquitous handheld platforms. This survey presents the past and present work on mobile volume rendering, and is meant to serve as an overview and introduction to the field. It proposes a classification of the current efforts and covers aspects such as advantages and issues of the mobile platforms, rendering strategies, performance and user interfaces. The paper ends by highlighting promising research directions to motivate the development of new and interesting mobile volume solutions. PMID:26731459

  11. Rapid Decimation for Direct Volume Rendering

    NASA Technical Reports Server (NTRS)

    Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane

    1997-01-01

    An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.

  12. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve the process of creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques that directly process the sensor data acquired by body scanners, rather than processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as for MRI.

  13. Elasticity-based three dimensional ultrasound real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.

    2009-02-01

    Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Its use, however, has been hindered by the lack of real-time visualization methods that are capable of producing high quality 3D renderings of the target/surface of interest. Volume rendering is a well-known visualization method, which can display clear surfaces out of the acquired volumetric data, and has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target/surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetric and angiographic applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on the GPU, which gives an update rate of 40 volumes/sec.
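
    A hedged sketch of the central idea as we read it: derive per-voxel opacity from strain (low strain, i.e. stiff tissue, becomes opaque) and blend it with a conventional gradient-magnitude opacity. The ramp limits and blend weight are assumed placeholder values, and the GPU implementation is not modeled.

```python
# Strain-driven opacity transfer function mixed with gradient-based opacity.
import numpy as np

def strain_opacity(strain, stiff=0.002, soft=0.01):
    """Low strain (stiff tissue) maps to high opacity; placeholder ramp."""
    return np.clip((soft - strain) / (soft - stiff), 0.0, 1.0)

def mixed_opacity(strain, bmode, weight=0.7):
    """Blend strain-based opacity with a gradient-magnitude opacity."""
    gz, gy, gx = np.gradient(bmode.astype(np.float32))
    grad = np.sqrt(gz**2 + gy**2 + gx**2)
    grad_op = grad / (grad.max() + 1e-6)
    return weight * strain_opacity(strain) + (1.0 - weight) * grad_op

strain = np.random.rand(16, 32, 32) * 0.01
bmode = np.random.rand(16, 32, 32)
print(mixed_opacity(strain, bmode).shape)
```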

  14. 3D volume visualization in remote radiation treatment planning

    NASA Astrophysics Data System (ADS)

    Yun, David Y.; Garcia, Hong-Mei C.; Mun, Seong K.; Rogers, James E.; Tohme, Walid G.; Carlson, Wayne E.; May, Stephen; Yagel, Roni

    1996-03-01

    This paper reports a novel application of 3D visualization in an ARPA-funded remote radiation treatment planning (RTP) experiment, utilizing supercomputer 3D volumetric modeling power and NASA ACTS (Advanced Communication Technology Satellite) communication bandwidths in the Ka-band range. The objective of radiation treatment is to deliver a tumoricidal dose of radiation to a tumor volume while minimizing doses to surrounding normal tissues. High performance graphics computers are required to allow physicians to view a 3D anatomy, specify proposed radiation beams, and evaluate the dose distribution around the tumor. Supercomputing power is needed to compute and even optimize the dose distribution according to pre-specified requirements. High speed communications offer possibilities for sharing scarce and expensive computing resources (e.g., hardware, software, personnel, etc.) as well as medical expertise for 3D treatment planning among hospitals. This paper provides initial technical insights into the feasibility of such resource sharing. The overall deployment of the RTP experiment, the visualization procedures, and parallel volume rendering in support of remote interactive 3D volume visualization are described.

  15. A practical approach to spectral volume rendering.

    PubMed

    Bergner, Steven; Möller, Torsten; Tory, Melanie; Drew, Mark S

    2005-01-01

    To make a spectral representation of color practicable for volume rendering, a new low-dimensional subspace method is used to act as the carrier of spectral information. With that model, spectral light-material interaction can be integrated into existing volume rendering methods at almost no penalty. In addition, slow rendering methods can profit from the new technique of postillumination, which generates spectral images in real time for arbitrary light spectra under a fixed viewpoint. Thus, the capability of spectral rendering to create distinct impressions of a scene under different lighting conditions is established as a method of real-time interaction. Although we use an achromatic opacity in our rendering, we show how spectral rendering permits different data set features to be emphasized or hidden as long as they have not been entirely obscured. The use of postillumination is an order of magnitude faster than changing the transfer function and repeating the projection step. To put the user in control of the spectral visualization, we devise a new widget, a "light-dial," for interactively changing the illumination and include a usability study of this new light space exploration tool. Applied to spectral transfer functions, different lights bring out or hide specific qualities of the data. In conjunction with postillumination, this provides a new means for preparing data for visualization and forms a new degree of freedom for guided exploration of volumetric data sets. PMID:15747643

  16. Perspective volume rendering on Parallel Algebraic Logic (PAL) computer

    NASA Astrophysics Data System (ADS)

    Li, Hongzheng; Shi, Hongchi

    1998-09-01

    We propose a perspective volume graphics rendering algorithm on SIMD mesh-connected computers and implement the algorithm on the Parallel Algebraic Logic computer. The algorithm is a parallel ray casting algorithm. It decomposes the 3D perspective projection into two transformations that can be implemented in the SIMD fashion to solve the data redistribution problem caused by non-regular data access patterns in the perspective projection.

  17. Computer-aided detection of colonic polyps using volume rendering

    NASA Astrophysics Data System (ADS)

    Hong, Wei; Qiu, Feng; Marino, Joseph; Kaufman, Arie

    2007-03-01

    This work utilizes a novel pipeline for the computer-aided detection (CAD) of colonic polyps, assisting radiologists in locating polyps when using a virtual colonoscopy system. Our CAD pipeline automatically detects polyps while reducing the number of false positives (FPs). It integrates volume rendering and conformal colon flattening with texture and shape analysis. The colon is first digitally cleansed, segmented, and extracted from the CT dataset of the abdomen. The colon surface is then mapped to a 2D rectangle using conformal mapping. Using this colon flattening method, the CAD problem is converted from 3D into 2D. The flattened image is rendered using a direct volume rendering of the 3D colon dataset with a translucent transfer function. Suspicious polyps are detected by applying a clustering method on the 2D volume rendered image. The FPs are reduced by analyzing shape and texture features of the suspicious areas detected by the clustering step. Compared with shape-based methods, ours is much faster and much more efficient as it avoids computing curvature and other shape parameters for the whole colon wall. We tested our method with 178 datasets and found it to be 100% sensitive to adenomatous polyps with a low rate of FPs. The CAD results are seamlessly integrated into a virtual colonoscopy system, providing the radiologists with visual cues and likelihood indicators of areas likely to contain polyps, and allowing them to quickly inspect the suspicious areas and further exploit the flattened colon view for easy navigation and bookmark placement.

  18. Volume Rendering for Curvilinear and Unstructured Grids

    SciTech Connect

    Max, N; Williams, P; Silva, C; Cook, R

    2003-03-05

    We discuss two volume rendering methods developed at Lawrence Livermore National Laboratory. The first, cell projection, renders the polygons in the projection of each cell. It requires a global visibility sort in order to composite the cells in back to front order, and we discuss several different algorithms for this sort. The second method uses regularly spaced slice planes perpendicular to the X, Y, or Z axes, which slice the cells into polygons. Both methods are supplemented with anti-aliasing techniques to deal with small cells that might fall between pixel samples or slice planes, and both have been parallelized.

  19. Imaging of Temporomandibular Joint: Approach by Direct Volume Rendering

    PubMed Central

    Caradonna, Carola; Bruschetta, Daniele; Vaccarino, Gianluigi; Milardi, Demetrio

    2014-01-01

    Background: The purpose of this study was to conduct a morphological analysis of the temporomandibular joint, a highly specialized synovial joint that permits movement and function of the mandible. Materials and Methods: We studied temporomandibular joint anatomy directly in living subjects, from 3D images obtained by Computed Tomography and Magnetic Resonance acquisitions and subsequently processed with 3D Surface Rendering and Volume Rendering techniques. The data were analysed with the goal of isolating, identifying and distinguishing the anatomical structures of the joint, and of extracting as much information as possible using post-processing software. Results: It was possible to reproduce the anatomy of the skeletal structures and, through Magnetic Resonance Imaging acquisitions, also to visualize the vascular, muscular, ligamentous and tendinous components of the articular complex, together with the capsule and the fibrous cartilaginous disc. Using Surface Rendering and Volume Rendering we obtained not only three-dimensional images whose colour and resolution are comparable to the usual anatomical preparations, but also a considerable number of finer anatomical details, by zooming, rotating and cutting the images and by adjusting their colour, transparency and opacity as needed. Conclusion: These results are encouraging and motivate further studies in other anatomical regions. PMID:25664280

  20. Fast volume rendering algorithm in a virtual endoscopy system

    NASA Astrophysics Data System (ADS)

    Kim, Sang H.; Kim, Jin K.; Ra, Jong Beom

    2002-05-01

    Recently, 3D virtual endoscopy has been used as an alternative noninvasive procedure for visualization of a hollow organ. In this paper, we propose a fast volume rendering scheme based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides a volume into hierarchical blocks and classifies them as opaque or transparent. The rendering procedure then proceeds as follows. In the first step, we perform ray casting only for sub-sampled pixels on the image plane and determine their pixel values and depth information. In the second step, by reducing the sub-sampling factor by half, we repeat ray casting for the newly added pixels, and their pixel values and depth information are determined. Here, the previously obtained depth information is utilized to reduce the processing time. This step is performed recursively until the full-size rendered image is acquired. Experiments conducted on a PC show that the proposed algorithm can reduce the rendering time by 70-80% for bronchus and colon endoscopy, compared with the brute-force ray casting scheme. Interactive rendering thereby becomes more realizable in a PC environment without any special hardware.
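
    The coarse-to-fine scheduling described above can be sketched as follows, heavily simplified: an orthographic maximum-intensity ray caster renders every 4th pixel, then every 2nd, then all pixels, filling in only new pixels at each pass. The depth-reuse optimization and perspective geometry of the paper are omitted.

```python
# Progressive (coarse-to-fine) rendering: refine the image by halving the
# pixel sub-sampling factor and casting rays only for newly added pixels.
import numpy as np

def cast_pixel(volume, y, x):
    return volume[:, y, x].max()          # stand-in ray: orthographic MIP

def progressive_render(volume):
    h, w = volume.shape[1:]
    image = np.full((h, w), -1.0, dtype=np.float32)
    for step in (4, 2, 1):
        for y in range(0, h, step):
            for x in range(0, w, step):
                if image[y, x] < 0:       # only pixels not yet rendered
                    image[y, x] = cast_pixel(volume, y, x)
        # a (step x step)-blocky preview could be displayed here
    return image

vol = np.random.rand(32, 64, 64).astype(np.float32)
print(bool(progressive_render(vol).min() >= 0))
```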

  1. A Cache Design Method for Spatial Information Visualization in 3D Real-Time Rendering Engine

    NASA Astrophysics Data System (ADS)

    Dai, X.; Xiong, H.; Zheng, X.

    2012-07-01

    A well-designed cache system has a positive impact on a 3D real-time rendering engine, and as the amount of visualization data grows the effect becomes more obvious. Caches are the basis on which the engine browses smoothly through data that is out of core memory or comes from the internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data that is currently being rendered by the engine; the pre-rendering cache stores the data that is dispatched according to the position of the viewpoint in the horizontal and vertical directions; the elimination cache stores the data that has been eliminated from the previous caches and is waiting to be written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the size limit (128 MB is the upper bound in our experiment), no items are eliminated from that file; instead, a new large cache file is created. If the number of large files exceeds the pre-set maximum, the earliest file is deleted from the disk. In this way only one file is open for both writing and reading, while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: threads load data into the rendering cache as soon as possible for rendering, into the pre-rendering cache for the next few frames, and into the elimination cache when the data is not needed for the moment. In our experiment, two threads are designed. The first thread organizes the memory cache according to the viewpoint and maintains two lists, an adding list and a deleting list; the adding list indexes the data that should be
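
    A toy sketch of the three-part memory cache described above: tiles move from the pre-rendering cache into the rendering cache when they become visible, and into a bounded elimination cache when they leave the view. The disk-backed large files and the threading are omitted, and all class and method names are our own.

```python
# Three-tier in-memory tile cache: pre-rendering -> rendering -> elimination.
from collections import OrderedDict

class TileCache:
    def __init__(self, max_eliminated=128):
        self.rendering = {}                 # tiles currently being drawn
        self.pre_rendering = {}             # tiles predicted from the viewpoint
        self.eliminated = OrderedDict()     # tiles queued for the disk cache
        self.max_eliminated = max_eliminated

    def prefetch(self, key, data):
        self.pre_rendering[key] = data

    def promote(self, key):
        """Move a prefetched tile into the rendering cache."""
        if key in self.pre_rendering:
            self.rendering[key] = self.pre_rendering.pop(key)

    def evict(self, key):
        """Move a tile out of the rendering cache, bounding the queue."""
        if key in self.rendering:
            self.eliminated[key] = self.rendering.pop(key)
            while len(self.eliminated) > self.max_eliminated:
                self.eliminated.popitem(last=False)  # would be flushed to disk

cache = TileCache()
cache.prefetch(("lod0", 3, 7), b"tile bytes")
cache.promote(("lod0", 3, 7))
cache.evict(("lod0", 3, 7))
print(len(cache.eliminated))
```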

  2. Volume rendering of segmented image objects.

    PubMed

    Bullitt, Elizabeth; Aylward, Stephen R

    2002-08-01

    This paper describes a new method of combining ray-casting with segmentation. Volume rendering is performed at interactive rates on personal computers, and visualizations include both "superficial" ray-casting through a shell at each object's surface and "deep" ray-casting through the confines of each object. A feature of the approach is the option to smoothly and interactively dilate segmentation boundaries along all axes. This ability, when combined with selective "turning off" of extraneous image objects, can help clinicians detect and evaluate segmentation errors that may affect surgical planning. We describe both a method optimized for displaying tubular objects and a more general method applicable to objects of arbitrary geometry. In both cases, select three-dimensional points are projected onto a modified z buffer that records additional information about the projected objects. A subsequent step selectively volume renders only through the object volumes indicated by the z buffer. We describe how our approach differs from other reported methods for combining segmentation with ray-casting, and illustrate how our method can be useful in helping to detect segmentation errors. PMID:12472272

  3. Rapid exploration of curvilinear grids using direct volume rendering

    NASA Technical Reports Server (NTRS)

    Vangelder, Allen; Wilhelms, Jane

    1993-01-01

    Fast techniques for direct volume rendering over curvilinear grids of hexahedral cells are developed. This type of 3D grid is common in computational fluid dynamics and finite element analysis. Four new projection methods are presented and compared with each other and with previous methods for tetrahedral grids and rectilinear grids. All four methods use polygon-rendering hardware for speed. A simplified algorithm for visibility ordering, which is based on a combination of breadth-first and depth-first searches, is described. A new multi-pass blending method is described that reduces visual artifacts that are introduced by linear interpolation in hardware where exponential interpolation is needed. Multi-pass blending is of equal interest to hardware-oriented projection methods used on rectilinear grids. Visualization tools that permit rapid data banding and cycling through transfer functions, as well as region restrictions, are described.

  4. High Performance GPU-Based Fourier Volume Rendering.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr

    2015-01-01

    Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial domain volume rendering algorithms that are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive competent platform that can deliver giant computational raw power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together by taking advantage of executing the rendering pipeline entirely on recent GPU architectures. PMID:25866499
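
    For illustration, the Fourier projection-slice relationship that FVR builds on can be verified on the CPU in a few lines: the sum of a volume along one axis equals the inverse 2D FFT of the corresponding central slice of its 3D FFT. Resampling for arbitrary view angles and all GPU/CUDA aspects of the paper are ignored here.

```python
# Projection-slice theorem check: an axis-aligned projection recovered from
# the zero-frequency plane of the volume's 3D spectrum.
import numpy as np

def fvr_projection(volume):
    spectrum = np.fft.fftn(volume)
    central_slice = spectrum[0, :, :]       # k0 = 0 plane through the DC term
    return np.real(np.fft.ifft2(central_slice))

vol = np.random.rand(64, 64, 64)
direct = vol.sum(axis=0)                    # spatial-domain projection
fourier = fvr_projection(vol)               # Fourier-domain projection
print(np.allclose(direct, fourier))
```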

  5. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using single-dimensional depictions that are useful to examine certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but could be spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.

  6. Volume rendering: application in static field conformal radiosurgery

    NASA Astrophysics Data System (ADS)

    Bourland, J. Daniel; Camp, Jon J.; Robb, Richard A.

    1992-09-01

    Lesions in the head which are large or irregularly shaped present challenges for radiosurgical treatment by linear accelerator or other radiosurgery modalities. To treat these lesions we are developing static field, conformal stereotactic radiosurgery. In this procedure seven to eleven megavoltage x-ray beams are aimed at the target volume. Each beam is designed from the beam's-eye view, and has its own unique geometry: gantry angle, table angle, and shape which conforms to the projected cross-section of the target. A difficulty with this and other 3-D treatment plans is the visualization of the treatment geometry and proposed treatment plan. Is the target volume geometrically covered by the arrangement of beams, and is the dose distribution adequate? To answer these questions we have been investigating the use of ANALYZE(TM) volume rendering to display the target anatomy and the resultant dose distribution.

  7. A Multiresolution Image Cache for Volume Rendering

    SciTech Connect

    LaMar, E; Pascucci, V

    2003-02-27

    The authors discuss the techniques and implementation details of a shared-memory image caching system for volume visualization and iso-surface rendering. One of the goals of the system is to decouple image generation from image display. This is done by maintaining a set of impostors for interactive display while the production of the impostor imagery is performed by a set of parallel, background processes. The system introduces a caching basis that is free of the gap/overlap artifacts of earlier caching techniques. Instead of placing impostors at fixed, pre-defined positions in world space, the technique adaptively places impostors relative to the camera viewpoint. The positions translate with the camera but stay aligned to the data; i.e., the positions translate, but do not rotate, with the camera. The viewing transformation is factored into a translation transformation and a rotation transformation. The impostor imagery is generated using just the translation transformation, and visible impostors are displayed using just the rotation transformation. Displayed image quality is improved by increasing the number of impostors, and the frequency at which impostors are re-rendered is improved by decreasing the number of impostors.

  8. Real-time volume rendering of digital medical images on an iOS device

    NASA Astrophysics Data System (ADS)

    Noon, Christian; Holub, Joseph; Winer, Eliot

    2013-03-01

    Performing high quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still a quite difficult task. This is especially true for 3D volume rendering of digital medical images. Enabling it would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs) such as OpenSceneGraph (OSG) as the primary graphics renderer coupled with iOS Cocoa Touch for user interaction, and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device so no Internet connection is required.

  9. Remote interactive direct volume rendering of AMR data

    SciTech Connect

    Kreylos, Oliver; Weber, Gunther H.; Bethel, E. Wes; Shalf, John M.; Hamann, Bernd; Joy, Kenneth I.

    2002-03-28

    We describe a framework for direct volume rendering of adaptive mesh refinement (AMR) data that operates directly on the hierarchical grid structure, without the need to resample data onto a single, uniform rectilinear grid. The framework can be used for a range of renderers optimized for particular hardware architectures: a hardware-assisted renderer for single-processor graphics workstations, and a massively parallel software-only renderer for supercomputers. It is also possible to use the framework for distributed rendering servers. By exploiting the multiresolution structure of AMR data, the hardware-assisted renderers can render large AMR data sets at interactive rates, even if the data is stored remotely.

  10. MRI Volume Fusion Based on 3D Shearlet Decompositions

    PubMed Central

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays many MRI scans can give 3D volume data with different contrasts, but the observers may want to view various contrasts in the same 3D volume. The conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on 3D band limited shearlet transform (3D BLST) is proposed. And this method is evaluated upon MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perspective impression and the quality indices indicate that the proposed method has a better performance than conventional 2D wavelet, DT CWT, and 3D wavelet, DT CWT based fusion methods. PMID:24817880

  12. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399
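
    As a deliberately simplified, CPU-only stand-in for the DRR generation that the paper accelerates on the GPU, the sketch below computes a parallel-beam DRR by converting HU to linear attenuation, integrating along one axis and exponentiating. Perspective geometry, wobbled splatting and raycasting sub-sampling are all omitted, and the attenuation constant is an assumed round number.

```python
# Parallel-beam DRR: line integrals of attenuation through a CT volume.
import numpy as np

def simple_drr(ct_hu, axis=0, mu_water=0.02, spacing_mm=1.0):
    """Convert HU to linear attenuation, integrate along `axis`, expose."""
    mu = mu_water * (1.0 + ct_hu.astype(np.float32) / 1000.0)
    mu = np.clip(mu, 0.0, None)                  # air and below -> 0
    line_integral = mu.sum(axis=axis) * spacing_mm
    return np.exp(-line_integral)                # simulated detector intensity

ct = np.random.randint(-1000, 1200, size=(64, 128, 128)).astype(np.int16)
drr = simple_drr(ct)
print(drr.shape, bool(drr.max() <= 1.0))
```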

  13. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  14. Direct Volume Rendering with Shading via Three-Dimensional Textures

    NASA Technical Reports Server (NTRS)

    Van Gelder, Allen; Kim, Kwansik

    1996-01-01

    A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.

  15. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research

    PubMed Central

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2013-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists’ demands for qualitative analysis of confocal microscopy data. PMID:23584131

  16. 3D rendering of passive millimeter-wave scenes using modified open source software

    NASA Astrophysics Data System (ADS)

    Murakowski, Maciej; Wilson, John; Murakowski, Janusz; Schneider, Garrett; Schuetz, Christopher; Prather, Dennis

    2011-05-01

    As millimeter-wave imaging technology becomes more mature, several applications are emerging for which this technology may be useful. However, effectively predicting the nuances of millimeter-wave phenomenology on the usefulness for a given application remains a challenge. To this end, an accurate millimeter-wave scene simulator would have tremendous value in predicting imager requirements for a given application. Herein, we present a passive millimeter-wave scene simulator built on the open-source 3d modeling software Blender. We describe the changes made to the Blender rendering engine to make it suitable for this purpose, including physically accurate reflections at each material interface, volumetric absorption and scattering, and tracking of both s and p polarizations. In addition, we have incorporated a mmW material database and world model that emulates the effects of cold sky profiles for varying weather conditions and frequencies of operation. The images produced by this model have been validated against calibrated experimental imagery captured by a passive scanning millimeter-wave imager for maritime, desert, and standoff detection applications.

  17. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.

  18. Interactive pre-integrated volume rendering of medical datasets

    NASA Astrophysics Data System (ADS)

    Kye, Heewon; Hong, Helen; Shin, Yeong-Gil

    2005-04-01

    Pre-integrated volume rendering, which produces high-quality images with less sampling, has become one of the most efficient and important techniques in the volume rendering field. In this paper, we propose an acceleration technique for pre-integrated rendering of dynamically classified volumes. Using overlapped min-max blocks, the empty-space skipping of ray casting can be applied to pre-integrated volume rendering. In addition, a new pre-integrated lookup table enables much faster rendering of high-precision data without degrading image quality. We have implemented our approaches not only on consumer graphics hardware but also on the CPU, and we show the performance gains using several medical data sets.
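
    The following sketch shows how a pre-integrated lookup table can be built in the spirit of the general technique rather than the paper's accelerated variant: for each pair of scalar values at the two ends of a ray segment, the opacity transfer function is numerically integrated along the segment and the resulting segment opacity is stored.

```python
# Build a 2D pre-integration table of segment opacities over (s_front, s_back).
import numpy as np

def build_preintegration_table(alpha_tf, n_steps=16, seg_len=1.0):
    """alpha_tf: 1D array, opacity per scalar value; returns an (S, S) table."""
    s_vals = np.arange(len(alpha_tf))
    table = np.zeros((len(alpha_tf), len(alpha_tf)), dtype=np.float32)
    ts = (np.arange(n_steps) + 0.5) / n_steps
    for i, sf in enumerate(s_vals):
        for j, sb in enumerate(s_vals):
            samples = sf + ts * (sb - sf)                  # linear scalar ramp
            alphas = np.interp(samples, s_vals, alpha_tf)
            trans = np.prod(1.0 - alphas * (seg_len / n_steps))
            table[i, j] = 1.0 - trans                      # segment opacity
    return table

tf = np.linspace(0.0, 0.5, 64)          # toy opacity transfer function
print(build_preintegration_table(tf).shape)
```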

  19. Local and Global Illumination in the Volume Rendering Integral

    SciTech Connect

    Max, N; Chen, M

    2005-10-21

    This article is intended as an update of the major survey by Max [1] on optical models for direct volume rendering. It provides a brief overview of the subject scope covered by [1] and brings recent developments, such as new shadow algorithms and refraction rendering, into perspective. In particular, we examine three fundamental aspects of direct volume rendering, namely the volume rendering integral, local illumination models and global illumination models, in a wavelength-independent manner. We also review developments in spectral volume rendering, in which visible light is considered as a form of electromagnetic radiation and optical models are implemented in conjunction with representations of spectral power distributions. This survey can provide a basis for, and encourage, new efforts to develop and use complex illumination models to achieve better realism and perception through optical correctness.
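
    For reference, the emission-absorption form of the volume rendering integral that this survey revisits is commonly written as below, for a ray of length D with background intensity I_0, source term C(s) and extinction coefficient tau(t).

```latex
% Emission-absorption volume rendering integral along a ray of length D,
% with background intensity I_0, source term C(s), and extinction tau(t).
I(D) = I_0 \, e^{-\int_0^{D} \tau(t)\,dt}
     + \int_0^{D} C(s)\,\tau(s)\, e^{-\int_s^{D} \tau(t)\,dt}\, ds
```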

  20. Interactive Volume Rendering of Diffusion Tensor Data

    SciTech Connect

    Hlawitschka, Mario; Weber, Gunther; Anwander, Alfred; Carmichael, Owen; Hamann, Bernd; Scheuermann, Gerik

    2007-03-30

    As 3D volumetric images of the human body become an increasingly crucial source of information for the diagnosis and treatment of a broad variety of medical conditions, advanced techniques that allow clinicians to efficiently and clearly visualize volumetric images become increasingly important. Interaction has proven to be a key concept in analysis of medical images because static images of 3D data are prone to artifacts and misunderstanding of depth. Furthermore, fading out clinically irrelevant aspects of the image while preserving contextual anatomical landmarks helps medical doctors to focus on important parts of the images without becoming disoriented. Our goal was to develop a tool that unifies interactive manipulation and context preserving visualization of medical images with a special focus on diffusion tensor imaging (DTI) data. At each image voxel, DTI provides a 3 x 3 tensor whose entries represent the 3D statistical properties of water diffusion locally. Water motion that is preferential to specific spatial directions suggests structural organization of the underlying biological tissue; in particular, in the human brain, the naturally occurring diffusion of water in the axon portion of neurons is predominantly anisotropic along the longitudinal direction of the elongated, fiber-like axons [MMM+02]. This property has made DTI an emerging source of information about the structural integrity of axons and axonal connectivity between brain regions, both of which are thought to be disrupted in a broad range of medical disorders including multiple sclerosis, cerebrovascular disease, and autism [Mos02, FCI+01, JLH+99, BGKM+04, BJB+03].

  1. 3D in the Fast Lane: Render as You Go with the Latest OpenGL Boards.

    ERIC Educational Resources Information Center

    Sauer, Jeff; Murphy, Sam

    1997-01-01

    NT OpenGL hardware allows modelers and animators to work at relatively inexpensive NT workstations in their own offices or homes, rather than in shared space and on shared workstation time in expensive studios. Rates seven OpenGL boards and two QuickDraw 3D accelerator boards for Mac users on overall value, wireframe and texture rendering, 2D acceleration, and…

  2. Real-time volume rendering of four-dimensional images based on three-dimensional texture mapping.

    PubMed

    Hwang, J; Kim, J S; Kim, J S; Kim, I Y; Kim, S I

    2001-06-01

    A four-dimensional (4-D) image consists of three-dimensional (3-D) volume data that varies with time. It is used to express a deforming or moving object in virtual surgery or 4-D ultrasound. It is difficult to obtain 4-D images by conventional ray-casting or shear-warp factorization methods because of their time-consuming rendering process and the pre-processing stage necessary whenever the volume data are changed. Even when 3-D texture mapping is used, repeated volume loading is time-consuming in 4-D image rendering. In this study, we propose a method to reduce data loading time by using the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3-D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3-D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Using continuous deformation, 50 volumes are rendered in interactive time on an SGI ONYX. Real-time volume rendering based on 3-D texture mapping is currently available for personal computers. PMID:11442097

  3. Real-time volume rendering of MRCP: clinical applications.

    PubMed

    Neri, E; Boraschi, P; Caramella, D; Braccini, G; Gigoni, R; Cosottini, M; Lodovigi, S; Bartolozzi, C

    2000-02-01

    MR-cholangiopancreatography (Signa Contour 0.5T; GE Medical Systems, Milwaukee, WI) data sets of 156 patients, obtained with a 2D T2-weighted FSE sequence in the coronal plane, were volume rendered (Advantage Windows 3.1; GEMS) independently by two radiologists, who were asked to define the range of signal intensities in which the signal of the pancreaticobiliary system was included and to rank the quality of the native images and volume renderings. Patients had biliary stones (n = 47), inflammatory ampullary stenoses (n = 18), pancreatic tumors (n = 12), surgical bilio-enteric anastomoses (n = 19), ampullary carcinomas (n = 2), a pancreatic duct stone (n = 1), cholangiocarcinoma (n = 3) and a normal pancreaticobiliary tree (n = 54). Good quality volume renderings of the bile ducts were obtained for ducts with a maximum diameter of at least 1.5 mm. The quality rank agreement between volume rendering and native images was excellent (k = 0.94). The correlation between the observers for setting the signal intensity range was excellent and statistically significant (P < 0.001). The correlation between the observers for the time of volume rendering was not statistically significant. Biliary stones could be displayed in 32/47 (68%) cases. The pancreatic duct stone was displayed as well. Inflammatory ampullary stenoses were detected in all cases (100%). In cases of pancreatic tumors, cholangiocarcinomas and ampullary carcinomas, volume rendering allowed identification of the site of stenosis. In surgical bilio-enteric anastomoses, volume rendering was helpful to display the residual biliary tract, the site of anastomosis and the enteric tract. Volume rendering could be a reliable and efficient tool for the study of the anatomy and pathological changes of the pancreaticobiliary tract. PMID:10697224

  4. Volume rendering using parallel algebraic logic (PAL) hardware

    NASA Astrophysics Data System (ADS)

    Li, Hongzheng; Shi, Hongchi; Coffield, Patrick C.

    1997-09-01

    In this paper, we present the implementation of a volume graphics rendering algorithm using shift-restoration operations on parallel algebraic logic (PAL) image processor. The algorithm is a parallel ray casting algorithm. In order to eliminate shading artifacts caused by inaccurate estimation of surface normal vectors, we use gray level volume instead of binary volume, and apply a low pass filter to smooth the volume object surfaces. By transforming the volume to an intermediate coordinate system to which there is a simple mapping from the object coordinate system, we solve the data redistribution problem caused by nonregular data access patterns in volume rendering. It has been proved very effective in reducing the data communication cost of the rendering algorithm on the PAL hardware.

  5. Fast stereoscopic images with ray-traced volume rendering

    SciTech Connect

    Adelson, S.J.; Hansen, C.D.

    1994-05-01

    One of the drawbacks of standard volume rendering techniques is that it is often difficult to comprehend the three-dimensional structure of the volume from a single frame; this is especially true in cases where there is no solid surface. Generally, several frames must be generated and viewed sequentially, using motion parallax to relay depth. Another option is to generate a single stereoscopic pair, resulting in clear and unambiguous depth information in both static and moving images. Methods have been developed which take advantage of the coherence between the two halves of a stereo pair for polygon rendering and ray-tracing, generating the second half of the pair in significantly less time than that required to completely render a single image. This paper reports the results of implementing these techniques with parallel ray-traced volume rendering. In tests with different data types, the time savings is in the range of 70-80%.

  6. Comparison between 3D volumetric rendering and multiplanar slices on the reliability of linear measurements on CBCT images: an in vitro study

    PubMed Central

    FERNANDES, Thais Maria Freire; ADAMCZYK, Julie; POLETI, Marcelo Lupion; HENRIQUES, José Fernando Castanha; FRIEDLAND, Bernard; GARIB, Daniela Gamba

    2015-01-01

    Objective The purpose of this study was to determine the accuracy and reliability of two methods of measurements of linear distances (multiplanar 2D and tridimensional reconstruction 3D) obtained from cone-beam computed tomography (CBCT) with different voxel sizes. Material and Methods Ten dry human mandibles were scanned at voxel sizes of 0.2 and 0.4 mm. Craniometric anatomical landmarks were identified twice by two independent operators on the multiplanar reconstructed and on volume rendering images that were generated by the software Dolphin®. Subsequently, physical measurements were performed using a digital caliper. Analysis of variance (ANOVA), intraclass correlation coefficient (ICC) and Bland-Altman were used for evaluating accuracy and reliability (p<0.05). Results Excellent intraobserver reliability and good to high precision interobserver reliability values were found for linear measurements from CBCT 3D and multiplanar images. Measurements performed on multiplanar reconstructed images were more accurate than measurements in volume rendering compared with the gold standard. No statistically significant difference was found between voxel protocols, independently of the measurement method. Conclusions Linear measurements on multiplanar images of 0.2 and 0.4 voxel are reliable and accurate when compared with direct caliper measurements. Caution should be taken in the volume rendering measurements, because the measurements were reliable, but not accurate for all variables. An increased voxel resolution did not result in greater accuracy of mandible measurements and would potentially provide increased patient radiation exposure. PMID:25004053

  7. Distributed volume rendering of global models of seismic wave propagation

    NASA Astrophysics Data System (ADS)

    Schwarz, N.; van Keken, P.; Renambot, L.; Tromp, J.; Komatitsch, D.; Johnson, A.; Leigh, J.

    2004-12-01

    Modeling the dynamics and structure of the Earth's interior now routinely involves massively distributed computational techniques, which makes it feasible to study time-dependent processes in the 3D Earth. Accurate, high-resolution models require the use of distributed simulations that run on, at least, moderately large PC clusters and produce large amounts of data on the order of terabytes distributed across the cluster. Visualizing such large data sets efficiently necessitates the use of the same type and magnitude of resources employed by the simulation. Generic, distributed volumetric rendering methods that produce high-quality monoscopic and stereoscopic visualizations currently exist, but rely on a different distributed data layout than is produced during simulation. This presents a challenge during the visualization process because an expensive data gather and redistribution stage is required before the distributed volume visualization algorithm can operate. We will compare different general purpose techniques and tools for visualizing volumetric data sets that are widely used in the field of scientific visualization, and propose a new approach that eliminates the data gather and redistribution stage by working directly on the data as distributed by, e.g., a seismic wave propagation simulation.

  8. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  9. Remote visualization system based on particle based volume rendering

    NASA Astrophysics Data System (ADS)

    Kawamura, Takuma; Idomura, Yasuhiro; Miyamura, Hiroko; Takemiya, Hiroshi; Sakamoto, Naohisa; Koyamada, Koji

    2015-01-01

    In this paper, we propose a novel remote visualization system based on particle-based volume rendering (PBVR) [1], which enables interactive analyses of extreme-scale volume data located on remote computing systems. The remote PBVR system consists of a Server, which generates particles for rendering, and a Client, which performs the volume rendering; the particle data size becomes significantly smaller than the original volume data. Depending on network bandwidth, the level of detail of the images is flexibly controlled to attain high frame rates. The Server is highly parallelized on various parallel platforms with a hybrid programming model. The mapping process is accelerated by two orders of magnitude compared with a single CPU. Structured and unstructured volume data with ~10^8 cells are processed within a few seconds. Compared with commodity Client/Server visualization tools, the total processing cost is dramatically reduced by using the proposed system.

  10. On-the-sphere block-based 3D terrain rendering using a wavelet-encoded terrain database for SVS

    NASA Astrophysics Data System (ADS)

    Baxes, Gregory A.; Linger, Tim

    2006-05-01

    Successful integration and the ultimate adoption of 3D Synthetic Vision (SV) systems into the flight environment as a cockpit aid to pilot situational awareness (SA) depends highly on overcoming two primary engineering obstacles: 1) storing on-board terrain databases with sufficient accuracy, resolution and coverage areas; and 2) achieving real-time, deterministic, accurate and artifact-free 3D terrain rendering. These combined elements create a significant, inversely-compatible challenge to deployable SV systems that has not been adequately addressed in the realm of proliferous VisSim terrain-rendering approaches. Safety-critical SV systems for flight-deployed use, ground-control of flight systems such as UAVs and accurate mission rehearsal systems require a solution to these challenges. This paper describes the TerraMetrics TerraBlocks method of storing wavelet-encoded terrain datasets and a tightly-coupled 3D terrain-block rendering approach. Large-area terrain datasets are encoded using a wavelet transform, producing a hierarchical quadtree, powers-of-2 structure of the original terrain data at numerous levels of detail (LODs). The entire original raster terrain mesh (e.g., DTED) is transformed using either lossless or lossy wavelet transformation and is maintained in an equirectangular projection. The lossless form retains all original terrain mesh data integrity in the flight dataset. A side-effect benefit of terrain data compression is also achieved. The TerraBlocks run-time 3D terrain-block renderer accesses arbitrary, uniform-sized blocks of terrain data at varying LODs, depending on scene composition, from the wavelet-transformed terrain dataset. Terrain data blocks retain a spatially-filtered depiction of the original mesh data at the retrieved LOD. Terrain data blocks are processed as discrete objects and placed into spherical world space, relative to the viewpoint. Rendering determinacy is achieved through terrain-block LOD management and spherical

  11. 3D medical volume reconstruction using web services.

    PubMed

    Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter

    2008-04-01

    We address the problem of 3D medical volume reconstruction using web services. The use of the proposed web services is motivated by the fact that the problem of 3D medical volume reconstruction requires significant computer resources and human expertise in the medical and computer science areas. The web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time the UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration processing query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned for verification by the medical collaborators in a new window. In this paper, we present the 3D volume reconstruction problem requirements and the architecture of the developed prototype system at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscopy. PMID:18336808

  12. Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings

    NASA Astrophysics Data System (ADS)

    Mankovich, Nicholas J.; White, Stuart C.

    1992-05-01

    This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study to demonstrate the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were therefore made from the CT data to provide graphic images showing the lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction software requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method for producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla where the 3-D reconstructions, made with different bone thresholds (windows), are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computer-generated lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques for 3-D reconstruction as well as cautionary language that should accompany the 3-D images.

  13. [A hybrid volume rendering method using general hardware].

    PubMed

    Li, Bin; Tian, Lianfang; Chen, Ping; Mao, Zongyuan

    2008-06-01

    In order to improve the effect and efficiency of the reconstructed image after hybrid volume rendering of different kinds of volume data from medical sequential slices or polygonal models, we propose a hybrid volume rendering method based on Shear-Warp with economical hardware. First, the hybrid volume data are pre-processed by Z-Buffer method and RLE (Run-Length Encoded) data structure. Then, during the process of compositing intermediate image, a resampling method based on the dual-interpolation and the intermediate slice interpolation methods is used to improve the efficiency and the effect. Finally, the reconstructed image is rendered by the texture-mapping technology of OpenGL. Experiments demonstrate the good performance of the proposed method. PMID:18693424
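
    The Shear-Warp pipeline above relies on a run-length encoded (RLE) classification of the volume so that transparent voxels can be skipped while the intermediate image is composited. The sketch below is a generic illustration of that encoding step, not the authors' implementation; the opacity threshold and the per-scanline layout are assumptions.

```python
import numpy as np

def rle_scanline(alpha, threshold=0.0):
    """Encode one opacity scanline as (transparent_run, opaque_samples) pairs
    so the compositing loop can skip empty space."""
    runs, i, n = [], 0, len(alpha)
    while i < n:
        start = i
        while i < n and alpha[i] <= threshold:   # length of the transparent run
            i += 1
        skip = i - start
        start = i
        while i < n and alpha[i] > threshold:    # contiguous non-transparent voxels
            i += 1
        runs.append((skip, np.asarray(alpha[start:i], dtype=np.float32)))
    return runs

# Example: a scanline that is mostly empty.
print(rle_scanline(np.array([0, 0, 0, 0.3, 0.8, 0, 0, 0.1])))
```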

  14. Research on transformation and optimization of large scale 3D modeling for real time rendering

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Yang, Yongchao; Zhao, Gang; He, Bin; Shen, Guosheng

    2011-12-01

    During the simulation of real-time three-dimensional scenes, popular modeling software and real-time rendering platforms are not compatible. The common solution is to create the three-dimensional scene model with the modeling software and then convert it to a format supported by the rendering platform. Taking a digital campus scene simulation as an example, this paper analyzes and solves the problems of surface loss, texture distortion and loss, and model flicker that occur during the conversion from 3ds Max to MultiGen Creator. It also proposes an optimization strategy for the converted model. The results show that this strategy is a good solution to the various problems arising in the conversion and that it increases the rendering speed of the model.

  15. Java multi-histogram volume rendering framework for medical images

    NASA Astrophysics Data System (ADS)

    Senseney, Justin; Bokinsky, Alexandra; Cheng, Ruida; McCreedy, Evan; McAuliffe, Matthew J.

    2013-03-01

    This work extends the multi-histogram volume rendering framework proposed by Kniss et al. [1] to provide rendering results based on the impression of overlaid triangles on a graph of image intensity versus gradient magnitude. The developed method of volume rendering allows for greater emphasis to boundary visualization while avoiding issues common in medical image acquisition. For example, partial voluming effects in computed tomography and intensity inhomogeneity of similar tissue types in magnetic resonance imaging introduce pixel values that will not reflect differing tissue types when a standard transfer function is applied to an intensity histogram. This new framework uses developing technology to improve upon the Kniss multi-histogram framework by using Java, the GPU, and MIPAV, an open-source medical image processing application, to allow multi-histogram techniques to be widely disseminated. The OpenGL view aligned texture rendering approach suffered from performance setbacks, inaccessibility, and usability problems. Rendering results can now be interactively compared with other rendering frameworks, surfaces can now be extracted for use in other programs, and file formats that are widely used in the field of biomedical imaging can be visualized using this multi-histogram approach. OpenCL and GLSL are used to produce this new multi-histogram approach, leveraging texture memory on the graphics processing unit of desktops to provide a new interactive method for visualizing biomedical images. Performance results for this method are generated and qualitative rendering results are compared. The resulting framework provides the opportunity for further applications in medical imaging, both in volume rendering and in generic image processing.
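
    The multi-histogram approach classifies each voxel by its position in a 2D (intensity, gradient magnitude) space instead of by intensity alone, which is what lets tissue boundaries be emphasized. The sketch below shows, under assumed array shapes and a caller-supplied RGBA table (it is not MIPAV's or the paper's API), how such a 2D transfer-function lookup can be expressed.

```python
import numpy as np

def gradient_magnitude(vol):
    """Central-difference gradient magnitude of a 3D scalar volume."""
    gx, gy, gz = np.gradient(vol.astype(np.float32))
    return np.sqrt(gx * gx + gy * gy + gz * gz)

def apply_2d_transfer_function(vol, tf_rgba):
    """Classify every voxel via a 2D table indexed by (intensity bin, gradient bin)."""
    n_int, n_grad = tf_rgba.shape[:2]
    grad = gradient_magnitude(vol)
    i_idx = ((vol - vol.min()) / (np.ptp(vol) + 1e-9) * (n_int - 1)).astype(int)
    g_idx = (grad / (grad.max() + 1e-9) * (n_grad - 1)).astype(int)
    return tf_rgba[i_idx, g_idx]              # shape: vol.shape + (4,)

# Example: a 256x256 table that keeps only high-gradient (boundary) voxels opaque.
tf = np.zeros((256, 256, 4), np.float32)
tf[:, 128:, 3] = 0.8
vol = np.random.rand(32, 32, 32).astype(np.float32)
rgba = apply_2d_transfer_function(vol, tf)
```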

  16. Remote volume rendering pipeline for mHealth applications

    NASA Astrophysics Data System (ADS)

    Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald

    2014-03-01

    We introduce a novel remote volume rendering pipeline for medical visualization targeted for mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners with respect to the complexity of the volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.

  17. A combined fuzzy-neural network model for non-linear prediction of 3-D rendering workload in grid computing.

    PubMed

    Doulamis, Nikolaos D; Doulamis, Anastasios D; Panagakis, Athanasios; Dolkas, Konstantinos; Varvarigou, Theodora A; Varvarigos, Emmanuel

    2004-04-01

    Deploying a commercial application on a grid infrastructure introduces new challenges in managing quality-of-service (QoS) requirements, most of which stem from the fact that the QoS negotiated between the user and the service provider must be strictly satisfied. An interesting commercial application with a wide impact on a variety of fields, which can benefit from computational grid technologies, is three-dimensional (3-D) rendering. In order to implement 3-D rendering on a grid infrastructure, however, appropriate scheduling and resource allocation mechanisms must be developed so that the negotiated QoS requirements are met. Efficient scheduling schemes require modeling and prediction of the rendering workload. In this paper, workload prediction is addressed based on a combined fuzzy classification and neural network model. Initially, appropriate descriptors are extracted to represent the synthetic world. The descriptors are obtained by parsing RIB-formatted files, which provide a general structure for describing computer-generated images. Fuzzy classification is used for organizing the rendering descriptors so that a reliable representation is achieved, which increases the prediction accuracy. A neural network performs workload prediction by modeling the nonlinear input-output relationship between the rendering descriptors and the respective computational complexity. To increase prediction accuracy, a constructive algorithm is adopted to train the neural network so that the network weights and size are estimated simultaneously. A grid scheduling scheme is then proposed to estimate the order in which the tasks should be executed and the most appropriate processor assignment so that the demanded QoS is satisfied as far as possible. A fair scheduling policy is considered the most appropriate. Experimental results on a real grid infrastructure are presented to illustrate the efficiency of the proposed workload prediction and scheduling algorithm.

  18. GPU-based Volume Rendering for Medical Image Visualization.

    PubMed

    Heng, Yang; Gu, Lixu

    2005-01-01

    During the rapid advancement of medical image visualization and augmented virtual reality applications, the low performance of the volume rendering algorithm is still a bottleneck. To make better use of well-developed hardware resources, a novel graphics processing unit (GPU)-based volume ray-casting algorithm is proposed in this paper. Running on a normal PC, it achieves an interactive frame rate while keeping the same image quality as the traditional volume rendering algorithm. Recently, GPU-accelerated direct volume rendering has positioned itself as an efficient tool for the display and visual analysis of volume data. However, for large medical image data it often shows low efficiency because too much memory is requested, and it has the additional drawback of writing color buffers multiple times per frame. The proposed algorithm improves the situation by implementing the ray casting operation completely on the GPU. It needs only one slice plane from the CPU and one 3D texture to store the data, while the GPU calculates the two end points of each ray and carries out the color blending operation in its pixel programs. Both the rendering speed and the memory consumption are thus improved, and the algorithm can handle most medical image data on normal PCs at interactive speed. PMID:17281405
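
    The ray caster described above computes the two ray end points on the GPU and blends samples front to back in a pixel program. As a CPU/NumPy stand-in for the blending step only (axis-aligned rays assumed; this is not the paper's shader code), the sketch below composites a pre-classified RGBA volume with the front-to-back 'over' operator and early ray termination.

```python
import numpy as np

def composite_front_to_back(rgba_vol, axis=2, alpha_cutoff=0.98):
    """Accumulate color/opacity along `axis` with the front-to-back 'over' operator."""
    vol = np.moveaxis(rgba_vol, axis, 0)              # put the sample axis first
    acc_rgb = np.zeros(vol.shape[1:3] + (3,), np.float32)
    acc_a = np.zeros(vol.shape[1:3], np.float32)
    for sample in vol:                                # front to back along each ray
        rgb, a = sample[..., :3], sample[..., 3]
        weight = ((1.0 - acc_a) * a)[..., None]
        acc_rgb += weight * rgb
        acc_a += (1.0 - acc_a) * a
        if acc_a.min() > alpha_cutoff:                # early ray termination
            break
    return acc_rgb, acc_a

vol = np.random.rand(64, 64, 64, 4).astype(np.float32)   # toy RGBA volume
image, alpha = composite_front_to_back(vol)
```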

  19. A fast way to visualize the brain surface with volume rendering of MRI data.

    PubMed

    Matsumoto, S; Asato, R; Konishi, J

    1999-11-01

    The preprocessing of 3-dimensional (3D) MRI data constitutes a bottleneck in the process of visualizing the brain surface with volume rendering. As a fast way to achieve this preprocessing, the authors propose a simple pipeline based on an algorithm of seed-growing type, for approximate segmentation of the intradural space in T1-weighted 3D MRI data. Except for the setting of a seed and four parameters, this pipeline proceeds in an unsupervised manner; no interactive intermediate step is involved. It was tested with 15 datasets from normal adults. The result was reproducible in that as long as the seed was located within the cerebral white matter, identical segmentation was achieved for each dataset. Although the pipeline ran with gross segmentation error along the floor of the cranial cavity, it performed well along the cranial vault so that subsequent volume rendering permitted the observation of the sulco-gyral pattern over cerebral convexities. Use of this pipeline followed by volume rendering is a handy approach to the visualization of the brain surface from 3D MRI data. PMID:10587913
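
    The core of the pipeline is a seed-growing segmentation of the intradural space driven by one seed and four parameters. The fragment below is a heavily simplified, generic illustration of such a 6-connected region grower with an intensity-interval acceptance criterion; it is an assumption for illustration, not the authors' pipeline.

```python
from collections import deque
import numpy as np

def grow_region(vol, seed, lo, hi):
    """6-connected region growing from `seed`, accepting voxels with lo <= value <= hi."""
    mask = np.zeros(vol.shape, dtype=bool)
    if not (lo <= vol[seed] <= hi):
        return mask
    mask[seed] = True
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1] and 0 <= nx < vol.shape[2]
                    and not mask[nz, ny, nx] and lo <= vol[nz, ny, nx] <= hi):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Example: grow from a seed placed in white matter (thresholds are illustrative).
mri = np.random.randint(0, 255, (64, 64, 64))
segmented = grow_region(mri, seed=(32, 32, 32), lo=100, hi=200)
```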

  20. 3-D surface rendering of myocardial SPECT images segmented by level set technique.

    PubMed

    Lee, Hwun-Jae; Lee, Sangbock

    2012-06-01

    SPECT (single photon emission computed tomography) myocardial imaging is a diagnostic technique in which a radiopharmaceutical that emits gamma rays is injected intravenously and, after the drug has dispersed evenly in the heart, the region of interest is imaged and examined with a computer for any change induced by disease. Myocardial perfusion imaging, which contains functional information, is useful for the non-invasive diagnosis of myocardial disease, but noise caused by physical factors and low resolution make the images difficult to read. To aid the reading of myocardial images, this study proposed a method that segments the myocardial images and reconstructs the segmented region into a 3D image. To resolve the reading difficulty, we segmented the left ventricle, the region of interest, using a level set and modeled the segmented region as a 3D image. PMID:20839037

  1. Segmentation, surface rendering, and surface simplification of 3-D skull images for the repair of a large skull defect

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Shi, Pengfei; Li, Shuguang

    2009-10-01

    Given the potential demonstrated by research into bone-tissue engineering, the use of medical image data for the rapid prototyping (RP) of scaffolds is a subject worthy of research. Computer-aided design and manufacture and medical imaging have created new possibilities for RP. Accurate and efficient design and fabrication of anatomic models is critical to these applications. We explore the application of RP computational methods to the repair of a pediatric skull defect. The focus of this study is the segmentation of the defect region seen in computerized tomography (CT) slice images of this patient's skull and the three-dimensional (3-D) surface rendering of the patient's CT-scan data. We see if our segmentation and surface rendering software can improve the generation of an implant model to fill a skull defect.

  2. ESPript/ENDscript: extracting and rendering sequence and 3D information from atomic structures of proteins

    PubMed Central

    Gouet, Patrice; Robert, Xavier; Courcelle, Emmanuel

    2003-01-01

    The fortran program ESPript was created in 1993, to display on a PostScript figure multiple sequence alignments adorned with secondary structure elements. A web server was made available in 1999 and ESPript has been linked to three major web tools: ProDom which identifies protein domains, PredictProtein which predicts secondary structure elements and NPS@ which runs sequence alignment programs. A web server named ENDscript was created in 2002 to facilitate the generation of ESPript figures containing a large amount of information. ENDscript uses programs such as BLAST, Clustal and PHYLODENDRON to work on protein sequences and such as DSSP, CNS and MOLSCRIPT to work on protein coordinates. It enables the creation, from a single Protein Data Bank identifier, of a multiple sequence alignment figure adorned with secondary structure elements of each sequence of known 3D structure. Similar 3D structures are superimposed in turn with the program PROFIT and a final figure is drawn with BOBSCRIPT, which shows sequence and structure conservation along the Cα trace of the query. ESPript and ENDscript are available at http://genopole.toulouse.inra.fr/ESPript. PMID:12824317

  3. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    PubMed

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a hand-held portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, in which the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667

  5. SOLIDFELIX: a transportable 3D static volume display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without the use of different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve the problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past many scientists tried to develop similar 3D displays. Our paper includes an overview from 1912 up to today. During several years of investigations on swept volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason the FELIX team started investigations also in the area of static volume displays. Within three years of research on our 3D static volume display at a normal high school in Germany, we were able to achieve considerable results despite the minor funding resources of this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare-earth group, or other fluorescent materials). We focused our investigations on one-frequency, two-step upconversion (OFTS-UC) and two-frequency, two-step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). In addition, such crystals are limited to a very small size, which is why we later investigated heavy-metal fluoride glasses, which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to the mentioned group, making it possible to increase both the display volume and the brightness of the images significantly. Although our display is currently

  6. Parallel volume rendering on the IBM Blue Gene/P.

    SciTech Connect

    Peterka, T.; Yu, H.; Ross, R.; Ma, K.-L.; Mathematics and Computer Science; Univ. of California at Davis

    2008-01-01

    Parallel ray casting volume rendering is implemented and tested on an IBM Blue Gene distributed memory parallel architecture. Data are presented from experiments under a number of different conditions, including dataset size, number of processors, low and high quality rendering, offline storage of results, and streaming of images for remote display. Performance is divided into three main sections of the algorithm: disk I/O, rendering, and compositing. The dynamic balance between these tasks varies with the number of processors and other conditions. Lessons learned from the work include understanding the balance between parallel I/O, computation, and communication within the context of visualization on supercomputers, recommendations for tuning and optimization, and opportunities for scaling further in the future. Extrapolating these results to very large data and image sizes suggests that a distributed memory HPC architecture such as the Blue Gene is a viable platform for some types of visualization at very large scales.
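
    Compositing is the third stage timed in these experiments: every node renders a partial image of its own subvolume, and the partial images are then blended in visibility order. A minimal sketch of that blend (premultiplied-alpha 'over' with a front-to-back ordering assumed; not the Blue Gene/P implementation) is given below.

```python
import numpy as np

def over(front, back):
    """Composite premultiplied-RGBA image `front` over `back`."""
    return front + (1.0 - front[..., 3:4]) * back

def composite_partials(partials_front_to_back):
    """Blend per-node partial images, ordered front to back along the view direction."""
    result = partials_front_to_back[0].astype(np.float32).copy()
    for partial in partials_front_to_back[1:]:
        result = over(result, partial.astype(np.float32))
    return result

# Example: three nodes each contribute a 512x512 premultiplied-RGBA partial image.
partials = [np.random.rand(512, 512, 4).astype(np.float32) for _ in range(3)]
final_image = composite_partials(partials)
```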

  7. Distributed GPU Volume Rendering of ASKAP Spectral Data Cubes

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2011-07-01

    The Australian SKA Pathfinder (ASKAP) will be producing 2.2 terabyte HI spectral-line cubes for each 8 hours of observation by 2013. Global views of spectral data cubes are vital for the detection of instrumentation errors, the identification of data artifacts and noise characteristics, and the discovery of strange phenomena, unexpected relations, or unknown patterns. We have previously presented the first framework that can render ASKAP-sized cubes at interactive frame rates. The framework provides the user with a real-time interactive volume rendering by combining shared and distributed memory architectures, distributed CPUs and graphics processing units (GPUs), using the ray-casting algorithm. In this paper we present two main extensions of this framework which are: using a multi-panel display system to provide a high resolution rendering output, and the ability to integrate automated data analysis tools into the visualization output and to interact with its output in place.

  8. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only saves a great deal of time, it also guarantees sharp images with more than adequate overlap. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames. The sharpness was estimated using a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions, and finally, by triangulating camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre- and post-event models a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference conversions. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, too much motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, which in turn improves the calculations.
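
    Frame selection is what turns the video into usable SfM input: from each interval of 15 frames the sharpest one is kept, scored with a derivative-based metric. The exact metric is not specified above, so the sketch below uses the variance of a discrete Laplacian as an assumed stand-in.

```python
import numpy as np

def sharpness(gray):
    """Derivative-based sharpness score: variance of a 5-point Laplacian."""
    g = gray.astype(np.float32)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def pick_sharpest(frames, interval=15):
    """Keep the sharpest frame out of each `interval` consecutive video frames."""
    return [max(frames[i:i + interval], key=sharpness)
            for i in range(0, len(frames), interval)]

# Example with synthetic grayscale frames.
frames = [np.random.rand(480, 640) for _ in range(45)]
selected = pick_sharpest(frames)        # three frames, one per 15-frame interval
```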

  9. Automatic 3-D grayscale volume matching and shape analysis.

    PubMed

    Guétat, Grégoire; Maitre, Matthieu; Joly, Laurène; Lai, Sen-Lin; Lee, Tzumin; Shinagawa, Yoshihisa

    2006-04-01

    Recently, shape matching in three dimensions (3-D) has been gaining importance in a wide variety of fields such as computer graphics, computer vision, medicine, and biology, with applications such as object recognition, medical diagnosis, and quantitative morphological analysis of biological operations. Automatic shape matching techniques developed in the field of computer graphics handle object surfaces, but ignore the intensities of inner voxels. In biology and medical imaging, voxel intensities obtained by computed tomography (CT), magnetic resonance imaging (MRI), and confocal microscopes are important for determining point correspondences. Nevertheless, most biomedical volume matching techniques require human interaction, and automatic methods assume the matched objects to have very similar shapes so as to avoid combinatorial explosions of point correspondences. This article is aimed at decreasing the gap between the two fields. The proposed method automatically finds dense point correspondences between two grayscale volumes; i.e., it finds a correspondent in the second volume for every voxel in the first volume, based on the voxel intensities. Multiresolution pyramids are introduced to reduce the computational load and handle highly plastic objects. We calculate the average shape of a set of similar objects and give a measure of plasticity to compare them. Matching results can also be used to generate intermediate volumes for morphing. We use various data to validate the effectiveness of our method: we calculate the average shape and plasticity of a set of fly brain cells, and we also match a human skull and an orangutan skull. PMID:16617625

  10. Image-Based Rendering of LOD1 3D City Models for traffic-augmented Immersive Street-view Navigation

    NASA Astrophysics Data System (ADS)

    Brédif, M.

    2013-10-01

    It may be argued that urban areas may now be modeled with sufficient detail for realistic fly-throughs over cities at a reasonable price point. Modeling cities at the street level for immersive street-view navigation is, however, still a very expensive (or even impossible) operation if one tries to match the level of detail acquired by street-view mobile mapping imagery. This paper proposes to leverage the richness of these street-view images with the common availability of nation-wide LOD1 3D city models, using an image-based rendering technique: projective multi-texturing. Such a coarse 3D city model may be used as a lightweight scene proxy of approximate coarse geometry. The images neighboring the interpolated viewpoint are projected onto this scene proxy using their estimated poses and calibrations and blended together according to their relative distance. This enables an immersive navigation within the image dataset that is perfectly equal to - and thus as rich as - the original images when viewed from their viewpoint locations, and which degrades gracefully in between viewpoint locations. Beyond proving the applicability of this preprocessing-free computer graphics technique to mobile mapping images and LOD1 3D city models, our contributions are three-fold. Firstly, image distortion is corrected online on the GPU, preventing an extra image resampling step. Secondly, externally-computed binary masks may be used to discard pixels corresponding to moving objects. Thirdly, we propose a shadowmap-inspired technique that prevents, at marginal cost, the projective texturing of surfaces beyond the first, as seen from the projected image viewpoint location. Finally, an augmented visualization application is introduced to showcase the proposed immersive navigation: images are unpopulated from vehicles using externally-computed binary masks and repopulated using a 3D visualization of a 2D traffic simulation.

  11. Efficient Encoding and Rendering of Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei

    1998-01-01

    Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding, for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have in many cases shown tremendous reductions, as high as 90%, in both storage space and inter-frame delay.
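
    The encoder combines voxel-level quantization with octree merging in space and difference encoding in time. The sketch below illustrates only the quantization and the temporal reuse of unchanged regions, with a flat block grid standing in for the octree; the block size, bin count, and function names are assumptions rather than the authors' code.

```python
import numpy as np

def quantize(vol, levels=256):
    """Voxel-level quantization of a float volume into `levels` integer bins."""
    lo, hi = float(vol.min()), float(vol.max())
    return np.round((vol - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.uint8)

def encode_time_series(volumes, block=8):
    """Per time step, store only the blocks that changed since the previous step."""
    encoded, previous = [], None
    for vol in volumes:
        q = quantize(vol)
        changed = {}
        for z in range(0, q.shape[0], block):
            for y in range(0, q.shape[1], block):
                for x in range(0, q.shape[2], block):
                    blk = q[z:z + block, y:y + block, x:x + block]
                    if previous is None or not np.array_equal(
                            blk, previous[z:z + block, y:y + block, x:x + block]):
                        changed[(z, y, x)] = blk.copy()
        encoded.append(changed)
        previous = q
    return encoded

# Two identical time steps: the second step stores no blocks at all.
t0 = np.random.rand(32, 32, 32)
series = encode_time_series([t0, t0.copy()])
print(len(series[0]), len(series[1]))    # 64 blocks vs. 0 blocks
```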

  12. Realistic fetus skin color processing for ultrasound volume rendering

    NASA Astrophysics Data System (ADS)

    Kim, Yun-Tae; Kim, Kyuhong; Park, Sung-Chan; Kang, Jooyoung; Kim, Jung-Ho

    2014-01-01

    This paper proposes realistic fetus skin color processing using a 2D color map and a tone mapping function (TMF) for ultrasound volume rendering. The contributions of this paper are a 2D color map generated from a gamut model of skin color and a TMF that depends on the lighting position. First, the gamut model of fetus skin color is calculated from the color distribution of baby images. The 2D color map is created using the gamut model for tone mapping during ray casting. For the translucent effect, a 2D color map in which lightness is inverted is generated. Second, to enhance the contrast of the rendered images, the luminance, color, and tone-curve TMF parameters are adjusted using a 2D Gaussian function that depends on the lighting position. The experimental results demonstrate that the proposed method achieves more realistic skin color reproduction than the conventional method.
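
    The second stage modulates the tone mapping function with a 2D Gaussian centred on the lighting position. Because the paper's exact parameterization is not given here, the sketch below only illustrates the idea: a per-pixel Gaussian weight pulls a gamma-style tone curve toward brighter output near the light. Every parameter name and value is an assumption.

```python
import numpy as np

def light_weight(height, width, light_xy, sigma):
    """Per-pixel 2D Gaussian weight centred on the light position (x, y)."""
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    d2 = (xs - light_xy[0]) ** 2 + (ys - light_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def tone_map(luminance, light_xy, sigma=80.0, base_gamma=1.0, boost=0.4):
    """Gamma tone curve whose exponent is lowered (image brightened) near the light."""
    h, w = luminance.shape
    gamma = base_gamma - boost * light_weight(h, w, light_xy, sigma)
    return np.clip(luminance, 0.0, 1.0) ** np.clip(gamma, 0.2, None)

rendered_luma = np.random.rand(256, 256).astype(np.float32)
adjusted = tone_map(rendered_luma, light_xy=(64, 64))
```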

  13. Significant acceleration of 2D-3D registration-based fusion of ultrasound and x-ray images by mesh-based DRR rendering

    NASA Astrophysics Data System (ADS)

    Kaiser, Markus; John, Matthias; Borsdorf, Anja; Mountney, Peter; Ionasec, Razvan; Nöttling, Alois; Kiefer, Philipp; Seeburger, Jörg; Neumuth, Thomas

    2013-03-01

    For transcatheter-based minimally invasive procedures in structural heart disease ultrasound and X-ray are the two enabling imaging modalities. A live fusion of both real-time modalities can potentially improve the workflow and the catheter navigation by combining the excellent instrument imaging of X-ray with the high-quality soft tissue imaging of ultrasound. A recently published approach to fuse X-ray fluoroscopy with trans-esophageal echo (TEE) registers the ultrasound probe to X-ray images by a 2D-3D registration method which inherently provides a registration of ultrasound images to X-ray images. In this paper, we significantly accelerate the 2D-3D registration method in this context. The main novelty is to generate the projection images (DRR) of the 3D object not via volume ray-casting but instead via a fast rendering of triangular meshes. This is possible, because in the setting for TEE/X-ray fusion the 3D geometry of the ultrasound probe is known in advance and their main components can be described by triangular meshes. We show that the new approach can achieve a speedup factor up to 65 and does not affect the registration accuracy when used in conjunction with the gradient correlation similarity measure. The improvement is independent of the underlying registration optimizer. Based on the results, a TEE/X-ray fusion could be performed with a higher frame rate and a shorter time lag towards real-time registration performance. The approach could potentially accelerate other applications of 2D-3D registrations, e.g. the registration of implant models with X-ray images.

  14. Volume rendering in the presence of partial volume effects

    NASA Astrophysics Data System (ADS)

    Souza, Andre D. A.; Udupa, Jayaram K.; Saha, Punam K.

    2002-05-01

    In tomographic images, partial volume effects (PVE) cause several artifacts in volume renditions. In x-ray CT, for example, soft-tissue-like pseudo structures appear in bone-to-air and bone-to-fat interfaces. Further, skin, which is identical to soft tissue in terms of CT number, obscures the rendition of the latter. The purpose of this paper is to demonstrate these phenomena and to provide effective solutions that yield significantly improved renditions. Here, we introduce two methods that detect and classify voxels with PVE in x-ray CT. A method is described to automatically peel skin so that PVE-resolved renditions of bone and soft tissue reveal considerably more details. In the first method, the fraction of each tissue material in each voxel v is estimated by taking into account the intensities of the voxels neighboring v. The second method is based on the following postulate (IEEE PAMI, vol. 23 pp. 689- 706, 2001): In any acquired image, voxels with the highest uncertainty occur in the vicinity of object boundaries. The removal of skin is achieved by means of mathematical morphology. Volume renditions have been created before and after applying the methods for several patient CT datasets. A mathematical phantom experiment involving different levels of PVE has been conducted by adding different degrees of noise and blurring. A quantitative evaluation is done utilizing the mathematical phantom and clinical CT data wherein an operator carefully masked out voxels with PVE in the segmented images. All results have demonstrated the enhanced quality of display of bone and soft tissue after applying the proposed methods. The quantitative evaluations indicate that more than 98% of the voxels with PVE are removed by the two methods and the second method performs slightly better than the first. Further, skin peeling vividly reveals fine details in the soft tissue structures.
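
    Both methods assign boundary voxels a material fraction rather than a hard label. A minimal two-material sketch of that idea, loosely echoing the first method (linear mixing of mean CT numbers followed by a small neighborhood average; the numbers and the SciPy-based smoothing are illustrative, not the paper's algorithm), is shown below.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def material_fraction(ct, mean_a, mean_b, neighborhood=3):
    """Fraction of material A per voxel under the mixing model
    intensity = f * mean_a + (1 - f) * mean_b, averaged over a local neighborhood."""
    f = (ct.astype(np.float32) - mean_b) / float(mean_a - mean_b)
    f = np.clip(f, 0.0, 1.0)
    return uniform_filter(f, size=neighborhood)

# Example: fraction of bone (mean ~1500 HU) vs. fat (mean ~-100 HU) in a toy CT volume.
ct = np.random.randint(-200, 2000, (64, 64, 64))
bone_fraction = material_fraction(ct, mean_a=1500.0, mean_b=-100.0)
```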

  15. Sphere-Enhanced Microwave Ablation (sMWA) Versus Bland Microwave Ablation (bMWA): Technical Parameters, Specific CT 3D Rendering and Histopathology

    SciTech Connect

    Gockner, T. L.; Zelzer, S.; Mokry, T. Gnutzmann, D. Bellemann, N.; Mogler, C.; Beierfuß, A. Köllensperger, E. Germann, G.; Radeleff, B. A. Stampfl, U. Kauczor, H. U.; Pereira, P. L.; Sommer, C. M.

    2015-04-15

    Purpose: This study was designed to compare technical parameters during ablation as well as CT 3D rendering and histopathology of the ablation zone between sphere-enhanced microwave ablation (sMWA) and bland microwave ablation (bMWA). Methods: In six sheep livers, 18 microwave ablations were performed with identical system presets (power output: 80 W, ablation time: 120 s). In three sheep, transarterial embolisation (TAE) was performed immediately before microwave ablation using spheres (diameter: 40 ± 10 μm) (sMWA). In the other three sheep, microwave ablation was performed without sphere embolisation (bMWA). Contrast-enhanced CT, sacrifice, and liver harvest followed immediately after microwave ablation. Study goals included technical parameters during ablation (resulting power output, ablation time), geometry of the ablation zone applying specific CT 3D rendering with a software prototype (short axis of the ablation zone, volume of the largest aligned ablation sphere within the ablation zone), and histopathology (hematoxylin-eosin, Masson Goldner and TUNEL). Results: Resulting power output/ablation times were 78.7 ± 1.0 W/120 ± 0.0 s for bMWA and 78.4 ± 1.0 W/120 ± 0.0 s for sMWA (n.s., respectively). Short axis/volume were 23.7 ± 3.7 mm/7.0 ± 2.4 cm³ for bMWA and 29.1 ± 3.4 mm/11.5 ± 3.9 cm³ for sMWA (P < 0.01, respectively). Histopathology confirmed the signs of coagulation necrosis as well as early and irreversible cell death for bMWA and sMWA. For sMWA, spheres were detected within, at the rim of, and outside of the ablation zone without conspicuous features. Conclusions: Specific CT 3D rendering identifies a larger ablation zone for sMWA compared with bMWA. The histopathological signs and the detectable amount of cell death are comparable for both groups. When comparing sMWA with bMWA, TAE has no effect on the technical parameters during ablation.

  16. A model-free method for annotating on vascular structure in volume rendered images

    NASA Astrophysics Data System (ADS)

    He, Wei; Li, Yanfang; Shi, Weili; Miao, Yu; He, Fei; Yan, Fei; Yang, Huamin; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2015-03-01

    The precise annotation of vessels is desired in computer-assisted systems to help surgeons identify each vessel branch. A method has been reported that annotates vessels on volume rendered images by rendering their names on them using a two-pass rendering process. In the reported method, however, cylinder surface models of the vessels must be generated for writing the vessel names. In fact, vessels are not true cylinders, so their surfaces cannot be simulated accurately by such models. This paper presents a model-free method for annotating vessels on volume rendered images by rendering their names on them using a two-pass rendering process: surface rendering and volume rendering. In the surface rendering pass, docking points for the vessel names are estimated using properties such as centerlines, running directions, and vessel regions, which are obtained in a preprocessing step. The vessel names are then pasted on the vessel surfaces at the docking points. In the volume rendering pass, the volume image is rendered using a fast volume rendering algorithm with the depth buffer of the image produced in the surface rendering pass. Finally, the rendered images are blended into a single result image. To evaluate the proposed method, a visualization system for the automated annotation of abdominal arteries was implemented. The experimental results show that vessel names are drawn correctly on the corresponding vessels in the volume rendered images. The proposed method has enormous potential to be adopted for annotating other organs that cannot be modeled using regular geometric surfaces.

  17. Accelerating fourier volume rendering by polar coordinate data representation.

    PubMed

    Liao, Jan-Ray; Lee, Shun-Zhi; Lee, Huai-Che

    2012-12-01

    Volume rendering is an important tool to visualize three-dimensional data in biomedicine by projecting the data to a two-dimensional plane. The projection is done by ray casting and its complexity is proportional to the number of three-dimensional data points. To reduce complexity, Fourier volume rendering (FVR) uses the slice projection theorem to facilitate the integration of voxels along the ray casting path. In this paper, we propose a new method for FVR that stores and processes the frequency-domain data in polar coordinates. By exploiting three aspects of data processing that are not possible in rectilinear coordinates, our new method is much faster than previous methods. The first aspect is data regularity. When data are stored in polar coordinates, extracting a slice involves accessing data stored in adjacent memory locations. This regularity makes memory access more efficient. The second aspect is the high data density near the origin in polar coordinates. We obtain two benefits from this aspect. The first allows us to extract a slice by nearest-neighbor interpolation instead of more complex interpolation, without sacrificing image quality. The second allows us to trade off between image quality and memory storage. The third aspect is to recognize that converting from rectilinear to polar coordinates is a one-time process. Therefore, we can use a better interpolation kernel with larger support in the coordinate conversion. In turn, most of the computation is shifted to the preprocessing stage and interactive rendering can be made very fast. In the experiments, we show that the speed of interactive visualization for our new method is independent of the size of the interpolation kernel, therefore achieving comparable image quality at a faster rate than previous methods. PMID:22771165
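
    FVR rests on the projection-slice theorem: the 2D inverse transform of the central slice of the volume's 3D spectrum equals the line-integral projection of the volume. The NumPy sketch below demonstrates that relationship for an axis-aligned view on a rectilinear grid, so it illustrates the theorem rather than the paper's polar-coordinate storage.

```python
import numpy as np

def fourier_projection(vol, axis=2):
    """Project `vol` along `axis` by extracting the central slice of its 3D spectrum."""
    spectrum = np.fft.fftshift(np.fft.fftn(vol))
    centre = vol.shape[axis] // 2                       # zero-frequency plane
    central_slice = np.take(spectrum, centre, axis=axis)
    return np.fft.ifft2(np.fft.ifftshift(central_slice)).real

vol = np.random.rand(32, 32, 32)
proj = fourier_projection(vol, axis=2)
assert np.allclose(proj, vol.sum(axis=2))               # matches the direct line integral
```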

  18. Registration of 3D spectral OCT volumes using 3D SIFT feature point matching

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan

    2009-02-01

    The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework [1] to 3D [2]. The SIFT feature extractor locates minima and maxima in the difference-of-Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head (ONH)- and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as the average voxel distance error in N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.

  19. Emphatic, interactive volume rendering to support variance in user expertise.

    PubMed

    Stredney, Don; Ebert, David S; Svakhine, Nikolai; Bryan, Jason; Sessanna, Dennis; Wiet, Gregory J

    2005-01-01

    Various levels of representation, from abstract to schematic to realistic, have been exploited for millennia to facilitate the transfer of information from one individual to another. Learning complex information, such as that found in biomedicine, proves particularly problematic for many, and requires incremental, step-wise depictions of the information to clarify structural, functional, and procedural relationships. Emerging volume-rendering techniques, such as non-photorealistic representation, coupled with advances in computational speed, especially new graphics processing units, provide unique capabilities for exploring the use of various levels of representation in interactive sessions. We have developed a system that produces images simulating pictorial representations for both scientific and biomedical visualization. The system combines traditional and novel volume illustration techniques. We present examples from our efforts to distill representational techniques for both creative exploration and emphatic presentation for clarity. More specifically, we present our efforts to adapt these techniques for the interactive simulation sessions being developed in a concurrent project for resident training in temporal bone dissection simulation. The goal of this effort is to evaluate the use of emphatic rendering to guide the user in an interactive session and to facilitate the learning of complex biomedical information, including structural, functional, and procedural information. PMID:15718791

  20. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  1. Dynamic real-time 4D cardiac MDCT image display using GPU-accelerated volume rendering.

    PubMed

    Zhang, Qi; Eagleson, Roy; Peters, Terry M

    2009-09-01

    Intraoperative cardiac monitoring, accurate preoperative diagnosis, and surgical planning are important components of minimally invasive cardiac therapy. Retrospective, electrocardiographically (ECG) gated, multidetector computed tomography (MDCT), four-dimensional (3D + time), real-time cardiac image visualization is an important tool for the surgeon in such procedures, particularly if the dynamic volumetric image can be registered to, and fused with, the actual patient anatomy. The addition of stereoscopic imaging provides a more intuitive environment by adding binocular vision and depth cues to structures within the beating heart. In this paper, we describe the design and implementation of a comprehensive stereoscopic 4D cardiac image visualization and manipulation platform, based on the opacity density radiation model, which exploits the power of modern graphics processing units (GPUs) in the rendering pipeline. In addition, we present a new algorithm to synchronize the phases of the dynamic heart to clinical ECG signals, and to calculate and compensate for latencies in the visualization pipeline. A dynamic multiresolution display is implemented to enable the interactive selection and emphasis of a volume of interest (VOI) within the entire contextual cardiac volume and to enhance performance, and a novel color and opacity adjustment algorithm is designed to increase the uniformity of the rendered multiresolution image of the heart. Our system provides a visualization environment superior to noninteractive software-based implementations, but with a rendering speed that is comparable to traditional, but inferior-quality, volume rendering approaches based on texture mapping. This retrospective ECG-gated dynamic cardiac display system can provide real-time feedback regarding the suspected pathology, function, and structural defects, as well as anatomical information such as chamber volume and morphology. PMID:19467840

  2. Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device

    NASA Astrophysics Data System (ADS)

    Färber, Matthias; Heller, Julika; Handels, Heinz

    2007-03-01

    The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract liquor. The training of this procedure is usually done on the patient, guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat, and of the original CT data, which contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, are defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation the Visible Human male data set has been used to generate a virtual training body. Several users with different levels of medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. In particular, the restriction of transversal needle movement together with the rotation constraints enabled by the 6DOF device facilitates a realistic puncture simulation.

  3. 3-D dynamic rupture simulations by a finite volume method

    NASA Astrophysics Data System (ADS)

    Benjemaa, M.; Glinsky-Olivier, N.; Cruz-Atienza, V. M.; Virieux, J.

    2009-07-01

    Dynamic rupture of a 3-D spontaneous crack of arbitrary shape is investigated using a finite volume (FV) approach. The full domain is decomposed into tetrahedra, whereas the surface on which the rupture takes place is discretized with triangles that are faces of tetrahedra. First, the elastodynamic equations are recast in a pseudo-conservative form for an easy application of the FV discretization. Explicit boundary conditions are given using criteria based on the conservation of discrete energy through the crack surface. Using a stress-threshold criterion, these conditions specify fluxes through those triangles that have suffered rupture. On these broken surfaces, stress follows a linear slip-weakening law, although other friction laws can be implemented. For Problem Version 3 of the dynamic-rupture code verification exercise conducted by the SCEC/USGS, numerical solutions on a planar fault exhibit a very high convergence rate and are in good agreement with the reference solution provided by a finite difference (FD) technique. For a non-planar fault of parabolic shape, numerical solutions agree well with those obtained with a semi-analytical boundary integral method in terms of shear stress amplitudes, stopping-phase arrival times and stress overshoots. Differences between solutions are attributed to the low-order interpolation of the FV approach, whose results are particularly sensitive to the mesh regularity (structured/unstructured). We expect this method, which is well adapted to multiprocessor parallel computing, to be competitive with others for solving large-scale dynamic rupture scenarios of seismic sources in the near future.
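
    For reference, the linear slip-weakening law mentioned above has a standard textbook form in which fault strength drops from a static to a dynamic level over a critical slip distance; the parameter names in this small sketch are generic, not taken from the paper:

```python
def slip_weakening_strength(slip, tau_s, tau_d, d_c):
    """Linear slip-weakening fault strength: decreases linearly from the
    static level tau_s to the dynamic level tau_d over the critical slip
    distance d_c, and stays at tau_d afterwards."""
    w = min(max(slip / d_c, 0.0), 1.0)   # weakening fraction in [0, 1]
    return tau_s - (tau_s - tau_d) * w
```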

  4. Noise-based volume rendering for the visualization of multivariate volumetric data.

    PubMed

    Khlebnikov, Rostislav; Kainz, Bernhard; Steinberger, Markus; Schmalstieg, Dieter

    2013-12-01

    Analysis of multivariate data is of great importance in many scientific disciplines. However, visualization of 3D spatially fixed multivariate volumetric data is a very challenging task. In this paper we present a method that allows simultaneous real-time visualization of multivariate data. We redistribute the opacity within a voxel to improve the readability of the color defined by a regular transfer function, and to maintain the see-through capabilities of volume rendering. We use predictable procedural noise (random-phase Gabor noise) to generate a high-frequency redistribution pattern and construct an opacity mapping function, which allows us to partition the available space among the displayed data attributes. This mapping function is appropriately filtered to avoid aliasing, while maintaining transparent regions. We show the usefulness of our approach on various data sets and with different example applications. Furthermore, we evaluate our method by comparing it to other visualization techniques in a controlled user study. Overall, the results of our study indicate that users are much more accurate in determining exact data values with our novel 3D volume visualization method. Significantly lower error rates for reading data values and a high subjective ranking of our method imply that it has a high chance of being adopted for the purpose of visualizing multivariate 3D data. PMID:24051860

  5. Fast software-based volume rendering using multimedia instructions on PC platforms and its application to virtual endoscopy

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro

    2003-05-01

    This paper describes a software-based fast volume rendering (VolR) method on a PC platform that uses multimedia instructions, such as SIMD instructions, which are currently available in PC CPUs. This method achieves fast rendering speed through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot products of normal vectors and light source direction vectors, (d) memory access to a huge data area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes of volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxels by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware, at thirteen frames per second. Semi-translucent display is also possible.
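
    Step (a), interpolation of voxel values at sample points, is exactly the kind of inner loop that maps well onto SIMD instructions. The NumPy sketch below expresses trilinear sampling array-at-a-time as a stand-in for that vectorised formulation; the function name and (z, y, x) coordinate convention are assumptions:

```python
import numpy as np

def trilinear_sample(vol, pts):
    """Trilinear interpolation of a (Z, Y, X) volume at (N, 3) sample
    positions given in voxel coordinates (z, y, x)."""
    p0 = np.floor(pts).astype(int)
    p0 = np.clip(p0, 0, np.array(vol.shape) - 2)   # keep p0 and p0+1 inside
    f = np.clip(pts - p0, 0.0, 1.0)                # fractional offsets
    z0, y0, x0 = p0[:, 0], p0[:, 1], p0[:, 2]
    fz, fy, fx = f[:, 0], f[:, 1], f[:, 2]

    c = 0.0
    for dz in (0, 1):                              # 8 corner contributions
        for dy in (0, 1):
            for dx in (0, 1):
                w = (np.where(dz, fz, 1 - fz)
                     * np.where(dy, fy, 1 - fy)
                     * np.where(dx, fx, 1 - fx))
                c = c + w * vol[z0 + dz, y0 + dy, x0 + dx]
    return c
```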

  6. Wobbled splatting--a fast perspective volume rendering method for simulation of x-ray images from CT.

    PubMed

    Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar

    2005-05-01

    3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs, which are perspective summed-voxel renderings, is desired. In this note, we present a simple and rapid method for the generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at frame rates of approximately 10 Hz when rendering volume images with a size of 30 MB. PMID:15843725
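
    A rough sketch of the wobbled-splatting idea, in which antialiasing comes from stochastic jitter of the voxel positions rather than from a footprint function, is given below; the toy perspective geometry, parameter names, and default values are assumptions rather than the authors' configuration:

```python
import numpy as np

def wobbled_splat_drr(vol, spacing=1.0, source_dist=1000.0, det_res=256,
                      det_size=300.0, jitter_sigma=0.5, seed=0):
    """DRR by splatting: jitter voxel positions, project them perspectively
    onto a detector, and sum their values per pixel."""
    rng = np.random.default_rng(seed)
    z, y, x = np.nonzero(vol > 0)                      # non-empty voxels only
    vals = vol[z, y, x].astype(float)
    pos = np.stack([x, y, z], axis=1).astype(float) * spacing
    pos -= pos.mean(axis=0)                            # centre the volume
    pos += rng.normal(0.0, jitter_sigma, pos.shape)    # stochastic "wobble"

    depth = source_dist + pos[:, 2]                    # distance from source
    u = pos[:, 0] * source_dist / depth                # perspective projection
    v = pos[:, 1] * source_dist / depth

    img = np.zeros((det_res, det_res))
    iu = ((u / det_size + 0.5) * det_res).astype(int)
    iv = ((v / det_size + 0.5) * det_res).astype(int)
    ok = (iu >= 0) & (iu < det_res) & (iv >= 0) & (iv < det_res)
    np.add.at(img, (iv[ok], iu[ok]), vals[ok])         # summed-voxel splat
    return img
```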

  7. A Parallel Pipelined Renderer for the Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Chiueh, Tzi-Cker; Ma, Kwan-Liu

    1997-01-01

    This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data sets take up large amounts of storage space, and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in a 40-50% saving in overall rendering time.

  8. Volume rendering of visible human data for an anatomical virtual environment.

    PubMed

    Kerr, J; Ratiu, P; Sellberg, M

    1996-01-01

    In this work, we utilize the axial anatomical human male sections from the National Library of Medicine's Visible Human Project to generate three-dimensional (3-D) volume representations of the human male subject. The two-dimensional (2-D) projection images were produced by combining ray tracing techniques with automated image segmentation routines. The resultant images provide accurate and realistic volumetric representations of the Visible Human data set, which is ultimately needed in medical virtual environment simulation. Ray tracing techniques provide methods by which 2-D volume views of a 3-D voxel array can be produced. The cross-sectional images can be scanned at different angles to produce rotated views of the voxel array. By combining volume views at incremental angles over 360 degrees, a full volumetric representation of the voxel array, in this case the human male data set, can be computer generated and displayed without the speed and memory limitations of trying to display the entire data array. Additional texture and feature information can be obtained from the data by applying optical property equations to the ray scans. The imaging effects that can be added to volume renderings using these equations include shading, shadowing, and transparency. The automated segmentation routines provide a means to distinguish between various anatomical structures of the body. These routines can be used to differentiate between skin, fat, muscle, cartilage, blood vessels, and bone. By combining automated segmentation routines with the ray-tracing techniques, 2-D volume views of various anatomical structures and features can be isolated from the full data set. Examples of these segmentation abilities are demonstrated for the human male data set, including volume views of the skeletal system, the musculoskeletal system, and part of the vascular system. The methods described above allow us to generate lifelike images, NURBS surface models, and realistic texture maps

  9. 3D-Assisted Quantitative Assessment of Orbital Volume Using an Open-Source Software Platform in a Taiwanese Population

    PubMed Central

    Shyu, Victor Bong-Hang; Hsu, Chung-En; Chen, Chih-hao; Chen, Chien-Tzung

    2015-01-01

    Orbital volume evaluation is an important part of pre-operative assessment in orbital trauma and congenital deformity patients. The availability of the affordable, open-source software OsiriX as a tool for preoperative planning has increased the popularity of radiological assessment by surgeons. A volume calculation method based on 3D volume rendering-assisted region-of-interest computation was used to determine the normal orbital volume in Taiwanese patients after reorientation to the Frankfurt plane. Method one utilized 3D points for intuitive orbital rim outlining. The mean normal orbital volume for left and right orbits was 24.3±1.51 ml and 24.7±1.17 ml in male and 21.0±1.21 ml and 21.1±1.30 ml in female subjects. Another method (method two), based on the bilateral orbital lateral rim, was also used to calculate orbital volume and compared with method one. With this method, the mean normal orbital volume for left and right orbits was 19.0±1.68 ml and 19.1±1.45 ml in male and 16.0±1.01 ml and 16.1±0.92 ml in female subjects. The inter-rater reliability and intra-rater measurement accuracy between users for both methods were found to be acceptable for orbital volume calculations. 3D-assisted quantification of orbital volume is a feasible technique for orbital volume assessment. The normal orbital volumes can be used as controls in cases of unilateral orbital reconstruction, with a mean size discrepancy of less than 3.1±2.03% in females and 2.7±1.32% in males. The OsiriX software can be used reliably by the individual surgeon as a comprehensive preoperative planning and imaging tool for orbital volume measurement and computed tomography reorientation. PMID:25774683

  10. Mapping high-fidelity volume rendering for medical imaging to CPU, GPU and many-core architectures.

    PubMed

    Smelyanskiy, Mikhail; Holmes, David; Chhugani, Jatin; Larson, Alan; Carmean, Douglas M; Hanson, Dennis; Dubey, Pradeep; Augustine, Kurt; Kim, Daehyun; Kyker, Alan; Lee, Victor W; Nguyen, Anthony D; Seiler, Larry; Robb, Richard

    2009-01-01

    Medical volumetric imaging requires high fidelity, high performance rendering algorithms. We motivate and analyze new volumetric rendering algorithms that are suited to modern parallel processing architectures. First, we describe the three major categories of volume rendering algorithms and confirm through an imaging scientist-guided evaluation that ray-casting is the most acceptable. We describe a thread- and data-parallel implementation of ray-casting that makes it amenable to key architectural trends of three modern commodity parallel architectures: multi-core, GPU, and an upcoming many-core Intel architecture code-named Larrabee. We achieve more than an order of magnitude performance improvement on a number of large 3D medical datasets. We further describe a data compression scheme that significantly reduces data-transfer overhead. This allows our approach to scale well to large numbers of Larrabee cores. PMID:19834234

  11. Improved ray-casting volume rendering used for brain MR imaging

    NASA Astrophysics Data System (ADS)

    Liu, Zhengguang; Zhang, Ying; Ma, Ximei; Lin, Xueyan

    2006-09-01

    This paper discusses the principle and implementation of the ray-casting volume rendering algorithm. In order to enhance image quality and the speed of interactive operation, we improve the gradient formula used in the ray-casting volume rendering algorithm and the compositing method for the sampling points.
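
    For context, the two ingredients named in this abstract are standard: the gradient at a voxel is usually estimated with central differences, and the samples along each ray are combined by front-to-back compositing with early ray termination. A minimal sketch of both (not the paper's specific improvements):

```python
import numpy as np

def central_gradient(vol, z, y, x):
    """Central-difference gradient (the usual 'gradient formula' used for
    shading); valid for interior voxels of a (Z, Y, X) volume."""
    gz = (vol[z + 1, y, x] - vol[z - 1, y, x]) * 0.5
    gy = (vol[z, y + 1, x] - vol[z, y - 1, x]) * 0.5
    gx = (vol[z, y, x + 1] - vol[z, y, x - 1]) * 0.5
    return np.array([gz, gy, gx])

def composite_front_to_back(colors, alphas):
    """Front-to-back compositing of the samples along one ray, with early
    termination once the accumulated opacity is close to 1."""
    c_acc, a_acc = 0.0, 0.0
    for c, a in zip(colors, alphas):
        c_acc += (1.0 - a_acc) * a * c
        a_acc += (1.0 - a_acc) * a
        if a_acc > 0.99:
            break
    return c_acc, a_acc
```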

  12. 3D ultrasound volume stitching using phase symmetry and Harris corner detection for orthopaedic applications

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Hacihaliloglu, Ilker; Abugharbieh, Rafeef

    2010-03-01

    Stitching of volumes obtained from three-dimensional (3D) ultrasound (US) scanners improves visualization of anatomy in many clinical applications. Fast but accurate volume registration remains the key challenge in this area. We propose a volume stitching method based on efficient registration of 3D US volumes obtained from a tracked US probe. Since the volumes, after adjusting for probe motion, are coarsely registered, we obtain salient correspondence points in the central slices of these volumes. This is done by first removing artifacts in the US slices using intensity-invariant local phase image processing and then applying the Harris corner detection algorithm. Fast sub-volume registration on a small neighborhood around the points then gives fast, accurate 3D registration parameters. The method has been tested on 3D US scans of phantom and real human radius and pelvis bones and a phantom human fetus. The method has also been compared to volumetric registration, as well as feature-based registration using 3D-SIFT. Quantitative results show an average post-registration error of 0.33 mm, which is comparable to volumetric registration accuracy (0.31 mm) and much better than 3D-SIFT-based registration, which failed to register the volumes. The proposed method was also much faster than volumetric registration (~4.5 seconds versus 83 seconds).
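
    A small sketch of the Harris corner response used to pick salient correspondence points on a (phase-filtered) central slice follows; sigma and k are conventional default values and the function name is an assumption:

```python
import numpy as np
from scipy import ndimage

def harris_response(img, sigma=1.5, k=0.05):
    """Harris corner response map; candidate correspondence points are its
    local maxima."""
    img = img.astype(float)
    ix = ndimage.sobel(img, axis=1)                 # horizontal gradient
    iy = ndimage.sobel(img, axis=0)                 # vertical gradient
    sxx = ndimage.gaussian_filter(ix * ix, sigma)   # smoothed structure tensor
    syy = ndimage.gaussian_filter(iy * iy, sigma)
    sxy = ndimage.gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2
```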

  13. Automatic segmentation of the fetal cerebellum on ultrasound volumes, using a 3D statistical shape model.

    PubMed

    Gutiérrez-Becker, Benjamín; Arámbula Cosío, Fernando; Guzmán Huerta, Mario E; Benavides-Serralde, Jesús Andrés; Camargo-Marín, Lisbeth; Medina Bañuelos, Verónica

    2013-09-01

    Previous work has shown that the segmentation of anatomical structures on 3D ultrasound data sets provides an important tool for the assessment of the fetal health. In this work, we present an algorithm based on a 3D statistical shape model to segment the fetal cerebellum on 3D ultrasound volumes. This model is adjusted using an ad hoc objective function which is in turn optimized using the Nelder-Mead simplex algorithm. Our algorithm was tested on ultrasound volumes of the fetal brain taken from 20 pregnant women, between 18 and 24 gestational weeks. An intraclass correlation coefficient of 0.8528 and a mean Dice coefficient of 0.8 between cerebellar volumes measured using manual techniques and the volumes calculated using our algorithm were obtained. As far as we know, this is the first effort to automatically segment fetal intracranial structures on 3D ultrasound data. PMID:23686392
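
    A minimal sketch of fitting a statistical shape model with the Nelder-Mead simplex, in the spirit of the approach above; mean_shape, modes, and image_cost are hypothetical placeholders for the shape model and the ad hoc objective function, which the abstract does not specify:

```python
import numpy as np
from scipy.optimize import minimize

def fit_shape_model(mean_shape, modes, image_cost, n_modes=5):
    """Optimise the first n_modes shape coefficients so that the deformed
    surface minimises an image-based cost (hypothetical callable)."""
    def objective(b):
        shape = mean_shape + np.tensordot(b, modes[:n_modes], axes=1)
        return image_cost(shape)

    b0 = np.zeros(n_modes)                       # start at the mean shape
    res = minimize(objective, b0, method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 2000})
    return mean_shape + np.tensordot(res.x, modes[:n_modes], axes=1), res
```

    Here mean_shape would be an (N, 3) array of surface points and modes a (K, N, 3) array of variation modes; in practice pose parameters (translation, rotation, scale) would be optimised alongside the shape coefficients.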

  14. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of texture required. This coherence also allows improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which identify coherent regions more accurately than the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.

  15. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has an accuracy level similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  16. Evaluation of Gastric Volumes: Comparison of 3-D Ultrasound and Magnetic Resonance Imaging.

    PubMed

    Buisman, Wijnand J; Mauritz, Femke A; Westerhuis, Wouter E; Gilja, Odd Helge; van der Zee, David C; van Herwaarden-Lindeboom, Maud Y A

    2016-07-01

    To investigate gastric accommodation, accurate measurements of gastric volumes are necessary. An excellent technique for measuring gastric volumes is dynamic magnetic resonance imaging (MRI). Unfortunately, dynamic MRI is expensive and not always available. A new 3-D ultrasound (US) method using a matrix transducer was developed to measure gastric volumes. In this prospective study, 14 healthy volunteers underwent dynamic MRI and 3-D US. Gastric volumes were calculated both as intra-gastric liquid content and as total gastric volume. Mean postprandial liquid gastric content was 397 ± 96.5 mL. The mean volume difference was 1.0 mL, with limits of agreement of -8.9 to 10.9 mL. When gastric air was taken into account, mean total gastric volume was 540 ± 115.4 mL. The mean volume difference was 2.3 mL, with limits of agreement of -21.1 to 26.4 mL. The matrix 3-D US showed excellent agreement with dynamic MRI. Therefore, matrix 3-D US is a reliable alternative for measuring gastric volumes. PMID:27067418
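
    The agreement figures quoted above (a mean difference plus limits of agreement) follow the usual Bland-Altman recipe; a minimal sketch, assuming paired volume measurements from the two modalities:

```python
import numpy as np

def limits_of_agreement(vol_us, vol_mri):
    """Bland-Altman agreement between two measurement methods: the mean
    difference (bias) and the 95% limits of agreement (bias +/- 1.96 SD)."""
    d = np.asarray(vol_us, float) - np.asarray(vol_mri, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```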

  17. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    NASA Astrophysics Data System (ADS)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has large potential both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.

  18. Interactive volume rendering of multimodality 4D cardiac data with the use of consumer graphics hardware

    NASA Astrophysics Data System (ADS)

    Enders, Frank; Strengert, Magnus; Iserhardt-Bauer, Sabine; Aladl, Usaf E.; Slomka, Piotr J.

    2003-05-01

    Interactive multimodality 4D volume rendering of cardiac images is challenging due to several factors. Animated rendering of fused volumes with multiple lookup tables (LUTs) and interactive adjustment of relative volume positions and orientations must be performed in real time. In addition, it is difficult to visualize the myocardium separated from the surrounding tissue in some modalities, such as MRI. In this work we propose to use software techniques combined with the hardware capabilities of modern consumer video cards for real-time visualization of time-varying multimodality fused cardiac volumes for diagnostic purposes.

  19. [An improved fast algorithm for ray casting volume rendering of medical images].

    PubMed

    Tao, Ling; Wang, Huina; Tian, Zhiliang

    2006-10-01

    The ray casting algorithm can obtain high-quality images in volume rendering; however, it suffers from heavy computational cost and slow rendering speed. Therefore, a new fast algorithm for ray casting volume rendering is proposed in this paper. The algorithm reduces matrix computation by exploiting the transformation characteristics of the re-sampling points between the two coordinate systems, so the re-sampling computation is accelerated. By extending the Bresenham algorithm to three dimensions and utilizing a bounding-box technique, the algorithm avoids sampling in empty voxels and greatly improves the efficiency of ray casting. The experimental results show that the improved acceleration algorithm produces images of the required quality while reducing the total number of operations remarkably and speeding up the volume rendering. PMID:17121341
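
    A sketch of a 3D extension of the Bresenham line algorithm of the kind referred to above, which enumerates the voxels along a ray so that empty ones can be skipped without any resampling work; the driving-axis formulation and the function name are assumptions:

```python
def bresenham_3d(p0, p1):
    """Integer 3D Bresenham line between two voxel coordinates; returns the
    list of voxels visited, stepping along the driving (longest) axis."""
    p = list(p0)
    d = [abs(b - a) for a, b in zip(p0, p1)]
    s = [1 if b >= a else -1 for a, b in zip(p0, p1)]
    axis = d.index(max(d))                        # driving axis
    others = [i for i in range(3) if i != axis]
    err = [2 * d[i] - d[axis] for i in others]    # Bresenham error terms
    voxels = [tuple(p)]
    for _ in range(d[axis]):
        p[axis] += s[axis]
        for k, i in enumerate(others):
            if err[k] >= 0:
                p[i] += s[i]
                err[k] -= 2 * d[axis]
            err[k] += 2 * d[i]
        voxels.append(tuple(p))
    return voxels
```

    Marching the returned voxel list against a binary occupancy mask (or the object's bounding box) lets a ray skip empty regions cheaply before any interpolation or shading is done.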

  20. Exploratory nuclear microprobe data visualisation using 3- and 4-dimensional biological volume rendering tools

    NASA Astrophysics Data System (ADS)

    Whitlow, Harry J.; Ren, Minqin; van Kan, Jeroen A.; Watt, Frank; White, Dan

    2007-07-01

    The emergence of confocal microscopy (CM) and atomic force microscopy (AFM) as everyday tools in cellular-level biology has stimulated the development of 3D data visualisation software. Conventional 2-dimensional images of cell (optical) sections obtained in transmission electron or optical microscopes, and more sophisticated multidimensional imaging methods, require processing software capable of 3D rendering and of mathematically transforming data in 3, 4, or more dimensions. The richness of data obtained from the different nuclear microscopy imaging techniques and their often parallel information channels (X-ray, secondary electron, Scanning Transmission Ion Microscopy) is often not obvious, because subtleties and interrelations in the data cannot otherwise be rendered in a human-interpretable way. In this exploratory study we have applied the BioImageXD software, originally developed for rendering multidimensional CM data, to several different nuclear microscopy data sets. Cells-on-Silicon STIM data from a human breast cancer cell line and elemental maps from lesions on rabbit aorta have been visualised. Mathematical filtering and averaging combined with hardware-accelerated 3D rendering enabled dramatically clear visualisation of inter-cellular regions comprising extracellular matrix proteins that were otherwise difficult to visualise, as well as of subcellular structures. For elemental mapping, the use of filtered correlation surfaces and colour channels clearly revealed interrelations in the data structures that are not easily discernible in the PIXE elemental maps.

  1. Mathematical models for volume rendering and neutron transport

    SciTech Connect

    Max, N.

    1994-09-01

    This paper reviews several different models for light interaction with volume densities of absorbing, glowing, reflecting, or scattering material. They include absorption only, glow only, glow and absorption combined, single scattering of external illumination, and multiple scattering. The models are derived from differential equations, and illustrated on a data set representing a cloud. They are related to corresponding models in neutron transport. The multiple scattering model uses an efficient method to propagate the radiation which does not suffer from the ray effect.
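
    For reference, the combined glow-and-absorption model reviewed here corresponds to the familiar volume rendering integral; the notation below is generic rather than the paper's own:

```latex
% I_0 : background intensity entering the ray,  \tau : extinction coefficient,
% g   : source (glow) term,  D : length of the ray towards the eye
I(D) = I_0 \, e^{-\int_0^{D} \tau(t)\,\mathrm{d}t}
     + \int_0^{D} g(s)\, e^{-\int_s^{D} \tau(t)\,\mathrm{d}t}\,\mathrm{d}s
```

    Setting g to zero gives the absorption-only model, setting tau to zero gives the glow-only model, and the single- and multiple-scattering models add illumination terms to the source g.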

  2. 3D photography in the objective analysis of volume augmentation including fat augmentation and dermal fillers.

    PubMed

    Meier, Jason D; Glasgold, Robert A; Glasgold, Mark J

    2011-11-01

    The authors present quantitative and objective 3D data from their studies showing long-term results of facial volume augmentation. The first study analyzes fat grafting of the midface, and the second study presents augmentation of the tear trough with hyaluronic filler. Surgeons using 3D quantitative analysis can learn the duration of results and the optimal amount to inject, as well as show patients results that are not demonstrable with standard 2D photography. PMID:22004863

  3. High-quality anatomical structure enhancement for cardiac image dynamic volume rendering

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Eagleson, Roy; Guiraudon, Gerard M.; Peters, Terry M.

    2008-03-01

    Dynamic volume rendering of the beating heart is an important element in cardiac disease diagnosis and therapy planning, providing the clinician with insight into the internal cardiac structure and functional behavior. Most clinical applications tend to focus upon a particular set of organ structures, and in the case of cardiac imaging, it would be helpful to embed anatomical features into the dynamic volume that are of particular importance to an intervention. A uniform transfer function (TF), such as is generally employed in volume rendering, cannot effectively isolate such structures because of the lack of spatial information and the small intensity differences between adjacent tissues. Explicit segmentation is a powerful way to approach this problem, which usually yields a single binary mask volume (MV), where a unit value in a voxel within the MV acts as a tag label representing the anatomical structure of interest (ASOI). These labels are used to determine the TF employed to adjust the ASOI display. Traditional approaches for rendering such segmented volumetric datasets usually deliver unsatisfactory results, such as noninteractive rendering speed, low image quality, intermixing artifacts along the rendered subvolume boundaries, and speckle noise. In this paper, we introduce a new "color coding" approach, based on a graphics processing unit (GPU) accelerated ray-casting algorithm and a pre-integrated voxel classification method, to address this problem. The mask tag labels derived from segmentation are first smoothed with a Gaussian filter, and multiple TFs are designed for each of the MVs and for the source cardiac volume, mapping each voxel's intensity to color and opacity at each sampling point along the casting ray. The resultant values are composited together using a boundary color adjustment technique, which in effect "codes" the segmented anatomical structure information into the rendered source volume of the beating heart. Our algorithm

  4. Early pregnancy placental bed and fetal vascular volume measurements using 3-D virtual reality.

    PubMed

    Reus, Averil D; Klop-van der Aa, Josine; Rifouna, Maria S; Koning, Anton H J; Exalto, Niek; van der Spek, Peter J; Steegers, Eric A P

    2014-08-01

    In this study, a new 3-D Virtual Reality (3D VR) technique for examining placental and uterine vasculature was investigated. The validity of placental bed vascular volume (PBVV) and fetal vascular volume (FVV) measurements was assessed and associations of PBVV and FVV with embryonic volume, crown-rump length, fetal birth weight and maternal parity were investigated. One hundred thirty-two patients were included in this study, and measurements were performed in 100 patients. Using V-Scope software, 100 3-D Power Doppler data sets of 100 pregnancies at 12 wk of gestation were analyzed with 3D VR in the I-Space Virtual Reality system. Volume measurements were performed with semi-automatic, pre-defined parameters. The inter-observer and intra-observer agreement was excellent with all intra-class correlation coefficients >0.93. PBVVs of multiparous women were significantly larger than the PBVVs of primiparous women (p = 0.008). In this study, no other associations were found. In conclusion, V-Scope offers a reproducible method for measuring PBVV and FVV at 12 wk of gestation, although we are unsure whether the volume measured represents the true volume of the vasculature. Maternal parity influences PBVV. PMID:24798392

  5. Distributed ray casting for high-speed volume rendering. Master's thesis

    SciTech Connect

    Brightbill, P.L.

    1992-01-01

    The volume rendering technique known as ray casting or ray tracing is notoriously slow for large volume sizes, yet provides superior images. A technique is needed to accelerate ray tracing of volumes without depending on special-purpose or parallel computers. Advances in distributed computing over the past two decades have motivated its use in this work. This thesis explores a technique to speed up ray casting through distributed programming. The work investigates the possibility of dividing the volume among general-purpose workstations and casting rays (using Levoy's front-to-back algorithm) through each subvolume independently. The final step is the composition of all subvolume-rendered images. Results indicate a 75 percent saving in rendering time by distributed processing over eight processors versus a single processor.
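
    The final composition step described above is the standard front-to-back "over" operation applied to the per-subvolume images; a minimal sketch assuming alpha-premultiplied colours and sub-images already sorted in viewing order:

```python
import numpy as np

def composite_subimages(subimages):
    """Front-to-back 'over' compositing of per-subvolume images.
    subimages: list of (rgb, alpha) pairs, rgb of shape (H, W, 3) with
    premultiplied alpha, alpha of shape (H, W); ordered nearest first."""
    rgb_acc = np.zeros_like(subimages[0][0])
    a_acc = np.zeros_like(subimages[0][1])
    for rgb, a in subimages:
        rgb_acc = rgb_acc + (1.0 - a_acc)[..., None] * rgb
        a_acc = a_acc + (1.0 - a_acc) * a
    return rgb_acc, a_acc
```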

  6. Area and volume coherence for efficient visualization of 3D scalar functions

    SciTech Connect

    Max, N.; Hanrahan, P.; Crawfis, R.

    1990-01-01

    We present an algorithm for compositing a combination of density clouds and contour surfaces used to represent a scalar function on a 3-D volume. The volume is divided into convex polyhedra, at whose vertices the function is known, and the polyhedra are sorted in depth before compositing. For data given at scattered 3-D points, we show that this sorting can be done in O(n) time if we choose the tetrahedra of the Delaunay triangulation as the polyhedra. The integrals for cloud opacity and visible cloud intensity along a ray through a convex polyhedron are computed analytically, and this computation is coherent across the polyhedron's area.

  7. 2D-3D Registration of CT Vertebra Volume to Fluoroscopy Projection: A Calibration Model Assessment

    NASA Astrophysics Data System (ADS)

    Bifulco, P.; Cesarelli, M.; Allen, R.; Romano, M.; Fratini, A.; Pasquariello, G.

    2009-12-01

    This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. An accurate measurement of the 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; the vertebra's 3D pose was then estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.

  8. A concept of volume rendering guided search process to analyze medical data set.

    PubMed

    Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro

    2008-03-01

    This paper first presents a parallel-coordinates-based parameter control panel (PCP). The PCP is used to control the parameters of focal region-based volume rendering (FRVR) during data analysis. It uses a parallel-coordinates-style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show dependencies between renditions and parameters. Based on the PCP, a concept of a volume rendering guided search process is proposed. The search pipeline is divided into four phases. Different parameters of FRVR are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can play the role of guiding a search process in the rendition space to help users efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness. PMID:18082371

  9. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.

  10. Breast mass detection using slice conspicuity in 3D reconstructed digital breast volumes

    NASA Astrophysics Data System (ADS)

    Kim, Seong Tae; Kim, Dae Hoe; Ro, Yong Man

    2014-09-01

    In digital breast tomosynthesis, the three-dimensional (3D) reconstructed volumes only provide quasi-3D structural information, with limited resolution along the depth direction due to insufficient sampling in that direction and the limited angular range. This limitation can seriously hamper conventional 3D image analysis techniques for detecting masses, because the limited number of projection views causes blurring in the out-of-focus planes. In this paper, we propose a novel mass detection approach using slice conspicuity in the 3D reconstructed digital breast volumes to overcome this limitation. First, to overcome the limited resolution along the depth direction, we detect regions of interest (ROIs) on each reconstructed slice and separately utilize the depth-directional information to combine the ROIs effectively. Furthermore, we measure the blurriness of each slice to resolve the performance degradation caused by blur in the out-of-focus planes. Finally, mass features are extracted from the selected in-focus slices and analyzed by a support vector machine classifier to reduce the false positives. Comparative experiments have been conducted on a clinical data set. Experimental results demonstrate that the proposed approach outperforms the conventional 3D approach by achieving high sensitivity with a small number of false positives.

  11. Combination of 3D TOF with 2D PC MRA technique for cerebral blood flow volume measurement.

    PubMed

    Guo, G; Wu, R H; Zhang, Y P; Guan, J T; Guo, Y L; Cheng, Y; terBrugge, K; Mikulis, D J

    2006-01-01

    The aim was to demonstrate the discrepancy in cerebral blood flow volume (BFV) estimation with 2D phase-contrast (2D PC) MRA guided by 3D time-of-flight (3D TOF) MR localization, using an "internal" standard. 20 groups of the common (CCA), internal (ICA), and external (ECA) carotid arteries in 10 healthy subjects were examined with 2D PC MRA guided by 3D TOF MR angiograms. The sum of the BFV of the internal and external carotid arteries was then compared with the ipsilateral common carotid artery flow. An accurate technique would demonstrate no difference; the difference was therefore a measure of the accuracy of the method. 3D TOF MRA localization is presented to allow the determination of a slice orientation that improves the accuracy of 2D PC MRA in estimating the BFV. Using the combined protocols, there was a better correlation in BFV estimates between the sum of ICA+ECA and the ipsilateral CCA (R² = 0.729, P = 0.000). The inconsistency (mean ± SD) was found to be 6.95 ± 5.95% for estimating the BFV in ICA+ECA and the ipsilateral CCA. The main inconsistency was attributed to the ECA and its branches. Guided by 3D TOF MRA localization, 2D PC MRA is more accurate in the determination of blood flow volume in the carotid arteries. PMID:17946401

  12. New approach to the perception of 3D shape based on veridicality, complexity, symmetry and volume.

    PubMed

    Pizlo, Zygmunt; Sawada, Tadamasa; Li, Yunfeng; Kropatsch, Walter G; Steinman, Robert M

    2010-01-01

    This paper reviews recent progress towards understanding 3D shape perception made possible by appreciating the significant role that veridicality and complexity play in the natural visual environment. The ability to see objects as they really are "out there" is derived from the complexity inherent in the 3D object's shape. The importance of both veridicality and complexity was ignored in most prior research. Appreciating their importance made it possible to devise a computational model that recovers the 3D shape of an object from only one of its 2D images. This model uses a simplicity principle consisting of only four a priori constraints representing properties of 3D shapes, primarily their symmetry and volume. The model recovers 3D shapes from a single 2D image as well as, and sometimes even better than, a human being. In the rare recoveries in which errors are observed, the errors made by the model and by human subjects are very similar. The model makes no use of depth, surfaces or learning. Recent elaborations of this model include: (i) the recovery of the shapes of natural objects, including human and animal bodies with limbs in varying positions; and (ii) providing the model with two input images, which allowed it to achieve virtually perfect shape constancy from almost all viewing directions. The review concludes with a comparison of some of the highlights of our novel, successful approach to the recovery of 3D shape from a 2D image with prior, less successful approaches. PMID:19800910

  13. A data distributed parallel algorithm for ray-traced volume rendering

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.

    1993-01-01

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5 and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, are left intact. The processing nodes perform local ray tracing of their subvolumes concurrently. No communication between processing units is needed during this local ray-tracing process. A subimage is generated by each processing unit, and the final image is obtained by compositing the subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.

  14. Generation of 3D ellipsoidal laser beams by means of a profiled volume chirped Bragg grating

    NASA Astrophysics Data System (ADS)

    Mironov, S. Yu; Poteomkin, A. K.; Gacheva, E. I.; Andrianov, A. V.; Zelenogorskii, V. V.; Vasiliev, R.; Smirnov, V.; Krasilnikov, M.; Stephan, F.; Khazanov, E. A.

    2016-05-01

    A method for shaping photocathode laser driver pulses into a 3D ellipsoidal form has been proposed and implemented. The key idea of the method is to use a chirped Bragg grating recorded within the ellipsoid volume and absent outside it. If a beam with a constant (within the grating reflection band) spectral density and a uniform (within the grating aperture) cross-section is incident on such a grating, the reflected beam will be a 3D ellipsoid in space and time. 3D ellipsoidal beams were obtained experimentally for the first time. It is expected that such laser beams will allow the electron bunch emittance to be reduced when applied at RF photoinjectors.

  15. Parallelizing a High Accuracy Hardware-Assisted Volume Renderer for Meshes with Arbitrary Polyhedra

    SciTech Connect

    Bennett,J; Cook,R; Max,N; May,D; Williams,P

    2001-07-23

    This paper discusses our efforts to improve the performance of the high-accuracy (HIAC) volume rendering system, based on cell projection, which is used to display unstructured scientific data sets for analysis. The parallelization of HIAC, using the pthreads and MPI APIs, resulted in significant speedup, but interactive frame rates are not yet attainable for very large data sets.

  16. A comparison of gradient estimation methods for volume rendering on unstructured meshes.

    PubMed

    Correa, Carlos D; Hero, Robert; Ma, Kwan-Liu

    2011-03-01

    This paper presents a study of gradient estimation methods for rendering unstructured-mesh volume data. Gradient estimation is necessary for rendering shaded isosurfaces and specular highlights, which provide important cues for shape and depth. Gradient estimation has been widely studied and deployed for regular-grid volume data to achieve local illumination effects, but much less so for unstructured-mesh data. As a result, most of the unstructured-mesh volume visualizations made so far were unlit. In this paper, we present a comprehensive study of gradient estimation methods for unstructured meshes with respect to their cost and performance. Through a number of benchmarks, we discuss the effects of mesh quality and scalar function complexity on the accuracy of the reconstruction, and their impact on lighting-enabled volume rendering. Based on our study, we also propose two heuristic improvements to the gradient reconstruction process. The first heuristic improves the rendering quality with a hybrid algorithm that combines the results of multiple reconstruction methods, based on the properties of a given mesh. The second heuristic improves the efficiency of the GPU implementation by restricting the computation of the gradient to a fixed-size local neighborhood. PMID:21233515
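
    One of the simplest reconstruction strategies in this setting is a regression (least-squares) gradient fitted to the scalar values at a vertex's neighbours; the sketch below illustrates that idea and is not necessarily one of the exact variants benchmarked in the paper:

```python
import numpy as np

def least_squares_gradient(p0, f0, neighbor_pts, neighbor_vals):
    """Fit f(p) ~ f0 + g . (p - p0) to the neighbouring vertices and return
    the gradient estimate g at the vertex p0."""
    A = np.asarray(neighbor_pts, float) - np.asarray(p0, float)   # (m, 3)
    b = np.asarray(neighbor_vals, float) - float(f0)              # (m,)
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g
```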

  17. Parallelizing a High Accuracy Hardware-Assisted Volume Renderer for Meshes with Arbitrary Polyhedra

    SciTech Connect

    Bennett, J; Cook, R; Max, N; May, D; Williams, P

    2001-03-26

    This paper discusses the authors' efforts to improve the performance of the high-accuracy (HIAC) volume rendering system, based on cell projection, which is used to display unstructured scientific data sets for analysis. The parallelization of HIAC, using the pthreads and MPI APIs, resulted in significant speedup, but interactive frame rates are not yet attainable for very large data sets.

  18. Registration of 2D cardiac images to real-time 3D ultrasound volumes for 3D stress echocardiography

    NASA Astrophysics Data System (ADS)

    Leung, K. Y. Esther; van Stralen, Marijn; Voormolen, Marco M.; van Burken, Gerard; Nemes, Attila; ten Cate, Folkert J.; Geleijnse, Marcel L.; de Jong, Nico; van der Steen, Antonius F. W.; Reiber, Johan H. C.; Bosch, Johan G.

    2006-03-01

    Three-dimensional (3D) stress echocardiography is a novel technique for diagnosing cardiac dysfunction by comparing wall motion of the left ventricle under different stages of stress. For quantitative comparison of this motion, it is essential to register the ultrasound data. We propose an intensity-based rigid registration method to retrieve two-dimensional (2D) four-chamber (4C), two-chamber, and short-axis planes from the 3D data set acquired in the stress stage, using manually selected 2D planes in the rest stage as reference. The algorithm uses Nelder-Mead simplex optimization to find the optimal transformation of one uniform scaling, three rotation, and three translation parameters. We compared registration using the SAD, SSD, and NCC metrics, performed on four resolution levels of a Gaussian pyramid. The registration's effectiveness was assessed by comparing the 3D positions of the registered apex and mitral valve midpoints and the 4C direction with the manually selected results. The registration was tested on data from 20 patients. The best results were found using the NCC metric on data downsampled by a factor of two: mean registration errors were 8.1 mm, 5.4 mm, and 8.0° in the apex position, mitral valve position, and 4C direction, respectively. The errors were close to the interobserver (7.1 mm, 3.8 mm, 7.4°) and intraobserver variability (5.2 mm, 3.3 mm, 7.0°), and better than the error before registration (9.4 mm, 9.0 mm, 9.9°). We demonstrated that the registration algorithm visually and quantitatively improves the alignment of rest and stress data sets, performing similarly to manual alignment. This will improve automated analysis in 3D stress echocardiography.
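
    The NCC metric that performed best here is the usual zero-mean normalised cross-correlation between the reference plane and the plane resampled from the stress volume; a minimal sketch (function name assumed):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalised cross-correlation of two equally sized images;
    1.0 indicates a perfect linear intensity match."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```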

  19. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.

    PubMed

    Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz

    2015-01-01

    This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real time by ray-casting-based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and to the haptic simulation of the insertion of a virtual bendable needle. To this end, different motion models that are applicable in real time are presented, and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering, and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion. PMID:26087498
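
    The run-time warping of the reference CT by a time-variant displacement field can be sketched with standard resampling tools; the array shapes and helper name below are assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(reference_ct, displacement, order=1):
    """Warp a (Z, Y, X) reference volume by a displacement field of shape
    (3, Z, Y, X): each output voxel samples the reference at its own
    position plus the local offset (in voxel units)."""
    grid = np.indices(reference_ct.shape).astype(float)   # (3, Z, Y, X)
    coords = grid + displacement
    return map_coordinates(reference_ct, coords, order=order, mode="nearest")
```

    The same displacement field, evaluated at the haptic device position, can be used to map that position back into the reference CT space, which is the coupling between the visual and haptic loops described above.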

  20. Web-based volume slicer for 3D electron-microscopy data from EMDB.

    PubMed

    Salavert-Torres, José; Iudin, Andrii; Lagerstedt, Ingvar; Sanz-García, Eduardo; Kleywegt, Gerard J; Patwardhan, Ardan

    2016-05-01

    We describe the functionality and design of the Volume slicer - a web-based slice viewer for EMDB entries. This tool uniquely provides the facility to view slices from 3D EM reconstructions along the three orthogonal axes and to rapidly switch between them and navigate through the volume. We have employed multiple rounds of user-experience testing with members of the EM community to ensure that the interface is easy and intuitive to use and the information provided is relevant. The impetus to develop the Volume slicer has been calls from the EM community to provide web-based interactive visualisation of 2D slice data. This would be useful for quick initial checks of the quality of a reconstruction. Again in response to calls from the community, we plan to further develop the Volume slicer into a fully-fledged Volume browser that provides integrated visualisation of EMDB and PDB entries from the molecular to the cellular scale. PMID:26876163

  1. Digital breast tomosynthesis: computerized detection of microcalcifications in reconstructed breast volume using a 3D approach

    NASA Astrophysics Data System (ADS)

    Chan, Heang-Ping; Sahiner, Berkman; Wei, Jun; Hadjiiski, Lubomir M.; Zhou, Chuan; Helvie, Mark A.

    2010-03-01

    We are developing a computer-aided detection (CAD) system for clustered microcalcifications in digital breast tomosynthesis (DBT). In this preliminary study, we investigated the approach of detecting microcalcifications in the tomosynthesized volume. The DBT volume is first enhanced by 3D multi-scale filtering and analysis of the eigenvalues of Hessian matrices with a calcification response function and signal-to-noise ratio enhancement filtering. Potential signal sites are identified in the enhanced volume, and local analysis is performed to further characterize each object. A 3D dynamic clustering procedure is designed to locate potential clusters using hierarchical criteria. We collected a pilot data set of two-view DBT mammograms of 39 breasts containing microcalcification clusters (17 malignant, 22 benign) with IRB approval. A total of 74 clusters were identified by an experienced radiologist in the 78 DBT views. Our prototype CAD system achieved view-based sensitivities of 90% and 80% at average FP rates of 7.3 and 2.0 clusters per volume, respectively. At the same levels of case-based sensitivity, the FP rates were 3.6 and 1.3 clusters per volume, respectively. For the subset of malignant clusters, the view-based detection sensitivity was 94% and 82% at average FP rates of 6.0 and 1.5 FP clusters per volume, respectively. At the same levels of case-based sensitivity, the FP rates were 1.2 and 0.9 clusters per volume, respectively. This study demonstrated that computerized microcalcification detection in 3D is a promising approach to the development of a CAD system for DBT. A study is underway to further improve the computer-vision methods and to optimize the processing parameters using a larger data set.
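
    The enhancement step described above rests on the eigenvalues of a Gaussian-smoothed Hessian at every voxel; the sketch below shows that ingredient only (the calcification response function itself is not reproduced, and sigma is an assumed scale):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(vol, sigma=1.0):
    """Eigenvalues of the Gaussian-smoothed Hessian at every voxel of a 3D
    volume; bright blob-like signals yield three large negative eigenvalues."""
    vol = vol.astype(float)
    H = np.empty(vol.shape + (3, 3))
    orders = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
    index = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]
    for order, (i, j) in zip(orders, index):
        d = gaussian_filter(vol, sigma, order=order)   # second derivatives
        H[..., i, j] = d
        H[..., j, i] = d
    return np.linalg.eigvalsh(H)     # shape (Z, Y, X, 3), ascending order
```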

  2. High Productivity DRIE solutions for 3D-SiP and MEMS Volume Manufacturing

    NASA Astrophysics Data System (ADS)

    Puech, M.; Thevenoud, JM; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, JM

    2006-04-01

    Emerging 3D-SiP technologies and high volume MEMS applications require high productivity mass production DRIE systems. The Alcatel DRIE product range has recently been optimised to reach the highest process and hardware production performances. A study based on sub-micron high aspect ratio structures encountered in the most stringent 3D-SiP has been carried out. The optimization of the Bosch process parameters has resulted in ultra-high silicon etch rates, with unrivalled uniformity and repeatability leading to excellent process yields. In parallel, most recent hardware and proprietary design optimization including vacuum pumping lines, process chamber, wafer chucks, pressure control system, and gas delivery are discussed. These improvements have been monitored in a mass production environment for a mobile phone application. Field data analysis shows a significant reduction of cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high volume MEMS applications. The typical etched patterns include tapered trenches for CMOS imagers, through silicon via holes for die stacking, well controlled profile angle for 3D high precision inertial sensors, and large exposed area features for inkjet printer heads and Silicon microphones.

  3. High-productivity DRIE solutions for 3D-SiP and MEMS volume manufacturing

    NASA Astrophysics Data System (ADS)

    Puech, M.; Thevenoud, J. M.; Launay, N.; Arnal, N.; Godinat, P.; Andrieu, B.; Gruffat, J. M.

    2006-12-01

    Emerging 3D-SiP technologies and high volume MEMS applications require high productivity mass production DRIE systems. The Alcatel DRIE product range has recently been optimized to reach the highest process and hardware production performances. A study based on sub-micron high aspect ratio structures encountered in the most stringent 3D-SiP has been carried out. The optimization of the Bosch process parameters has shown ultra-high silicon etch rates, with unrivaled uniformity and repeatability leading to excellent process yields. In parallel, most recent hardware and proprietary design optimization including vacuum pumping lines, process chamber, wafer chucks, pressure control system, and gas delivery are discussed. A key factor for achieving the highest performances was the recognized expertise of Alcatel vacuum and plasma science technologies. These improvements have been monitored in a mass production environment for a mobile phone application. Field data analysis shows a significant reduction of cost of ownership thanks to increased throughput and much lower running costs. These benefits are now available for all 3D-SiP and high volume MEMS applications. The typical etched patterns include tapered trenches for CMOS imagers, through silicon via holes for die stacking, well controlled profile angle for 3D high precision inertial sensors, and large exposed area features for inkjet printer head and Silicon microphones.

  4. CT-guided Irreversible Electroporation in an Acute Porcine Liver Model: Effect of Previous Transarterial Iodized Oil Tissue Marking on Technical Parameters, 3D Computed Tomographic Rendering of the Electroporation Zone, and Histopathology

    SciTech Connect

    Sommer, C. M.; Fritz, S.; Vollherbst, D.; Zelzer, S.; Wachter, M. F.; Bellemann, N.; Gockner, T.; Mokry, T.; Schmitz, A.; Aulmann, S.; Stampfl, U.; Pereira, P.; Kauczor, H. U.; Werner, J.; Radeleff, B. A.

    2015-02-15

    Purpose: To evaluate the effect of previous transarterial iodized oil tissue marking (ITM) on technical parameters, three-dimensional (3D) computed tomographic (CT) rendering of the electroporation zone, and histopathology after CT-guided irreversible electroporation (IRE) in an acute porcine liver model as a potential strategy to improve IRE performance. Methods: After Ethics Committee approval was obtained, in five landrace pigs, two IREs of the right and left liver (RL and LL) were performed under CT guidance with identical electroporation parameters. Before IRE, transarterial marking of the LL was performed with iodized oil. Nonenhanced and contrast-enhanced CT examinations followed. One hour after IRE, animals were killed and livers collected. Mean resulting voltage and amperage during IRE were assessed. For 3D CT rendering of the electroporation zone, parameters for size and shape were analyzed. Quantitative data were compared by the Mann–Whitney test. Histopathological differences were assessed. Results: Mean resulting voltage and amperage were 2,545.3 ± 66.0 V and 26.1 ± 1.8 A for RL, and 2,537.3 ± 69.0 V and 27.7 ± 1.8 A for LL without significant differences. Short axis, volume, and sphericity index were 16.5 ± 4.4 mm, 8.6 ± 3.2 cm³, and 1.7 ± 0.3 for RL, and 18.2 ± 3.4 mm, 9.8 ± 3.8 cm³, and 1.7 ± 0.3 for LL without significant differences. For RL and LL, the electroporation zone consisted of severely widened hepatic sinusoids containing erythrocytes and showed homogeneous apoptosis. For LL, iodized oil could be detected in the center and at the rim of the electroporation zone. Conclusion: There is no adverse effect of previous ITM on technical parameters, 3D CT rendering of the electroporation zone, and histopathology after CT-guided IRE of the liver.

  5. Finite volume and finite element methods applied to 3D laminar and turbulent channel flows

    SciTech Connect

    Louda, Petr; Příhoda, Jaromír; Sváček, Petr; Kozel, Karel

    2014-12-10

    The work deals with numerical simulations of incompressible flow in channels with rectangular cross section. The rectangular cross section itself leads to the development of various secondary flow patterns, where the accuracy of the simulation is influenced by the numerical viscosity of the scheme and by turbulence modeling. In this work, some developments of a stabilized finite element method are presented. Its results are compared with those of an implicit finite volume method, also described, in laminar and turbulent flows. It is shown that numerical viscosity can cause errors of the same magnitude as those introduced by different turbulence models. The finite volume method is also applied to 3D turbulent flow around a backward-facing step, and good agreement with 3D experimental results is obtained.

  6. Detection and 3D representation of pulmonary air bubbles in HRCT volumes

    NASA Astrophysics Data System (ADS)

    Silva, Jose S.; Silva, Augusto F.; Santos, Beatriz S.; Madeira, Joaquim

    2003-05-01

    Bubble emphysema is a disease characterized by the presence of air bubbles within the lungs. With the purpose of identifying pulmonary air bubbles, two alternative methods were developed, using High Resolution Computed Tomography (HRCT) exams. The search volume is confined to the pulmonary volume through a previously developed pulmonary contour detection algorithm. The first detection method follows a slice-by-slice approach and uses selection criteria based on the Hounsfield levels, dimensions, shape and localization of the bubbles. Candidate regions that do not exhibit axial coherence along at least two sections are excluded. Intermediate sections are interpolated for a more realistic representation of lungs and bubbles. The second detection method, after the pulmonary volume delimitation, follows a fully 3D approach. A global threshold is applied to the entire lung volume, returning candidate regions. 3D morphologic operators are used to remove spurious structures and to circumscribe the bubbles. Bubble representation is accomplished by two alternative methods. The first generates bubble surfaces based on the voxel volumes previously detected; the second method assumes that bubbles are approximately spherical and, in order to obtain better 3D representations, fits super-quadrics to the bubble volume. The fitting process is based on a non-linear least-squares optimization method, where a super-quadric is adapted to a regular grid of points defined on each bubble. All methods were applied to real and semi-synthetic data where artificial and randomly deformed bubbles were embedded in the interior of healthy lungs. Quantitative results regarding bubble geometric features are either similar to a priori known values used in simulation tests, or indicate clinically acceptable dimensions and locations when dealing with real data.
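    A hedged sketch of the fully 3D detection variant (global threshold inside the lung mask followed by 3D morphology); the threshold, structuring element and minimum size below are illustrative assumptions rather than the values used in the paper:

    ```python
    # Candidate air-bubble detection inside a precomputed lung mask.
    import numpy as np
    from scipy import ndimage

    def detect_bubbles(hrct, lung_mask, air_threshold=-950, min_voxels=20):
        candidates = (hrct < air_threshold) & lung_mask
        # 3D opening removes isolated noise voxels and thin spurious structures.
        candidates = ndimage.binary_opening(candidates, structure=np.ones((3, 3, 3)))
        labels, n = ndimage.label(candidates)
        sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
        keep = np.flatnonzero(sizes >= min_voxels) + 1   # label ids to retain
        return np.isin(labels, keep), labels
    ```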

  7. Visual Quality Adjustment for Volume Rendering in a Head-Tracked Virtual Environment.

    PubMed

    Hanel, Claudia; Weyers, Benjamin; Hentschel, Bernd; Kuhlen, Torsten W

    2016-04-01

    To avoid simulator sickness and improve presence in immersive virtual environments (IVEs), high frame rates and low latency are required. In contrast, volume rendering applications typically strive for high visual quality that induces high computational load and, thus, leads to low frame rates. To evaluate this trade-off in IVEs, we conducted a controlled user study with 53 participants. Search and count tasks were performed in a CAVE with varying volume rendering conditions which are applied according to viewer position updates corresponding to head tracking. The results of our study indicate that participants preferred the rendering condition with continuous adjustment of the visual quality over an instantaneous adjustment which guaranteed low latency, and over no adjustment providing constant high visual quality but rather low frame rates. Within the continuous condition, the participants showed the best task performance and felt less disturbed by effects of the visualization during movements. Our findings provide a good basis for further evaluations of how to accelerate volume rendering in IVEs according to users' preferences. PMID:26780811

  8. 3D volume reconstruction of a mouse brain from histological sections using warp filtering

    SciTech Connect

    Ju, Tao; Warren, Joe; Carson, James P.; Bello, Musodiq; Kakadiaris, Ioannis; Chiu, Wah; Thaller, Christina; Eichele, Gregor

    2006-09-30

    Sectioning tissues for optical microscopy often introduces distortions in the resulting sections that make 3D reconstruction difficult. Here we present an automatic method for producing a smooth 3D volume from distorted 2D sections in the absence of any undistorted references. The method is based on pairwise elastic image warps between successive tissue sections, which can be computed by 2D image registration. Using a Gaussian filter, an average warp is computed for each section from the pairwise warps in a group of its neighboring sections. The average warps deform each section to match its neighboring sections, thus creating a smooth volume where corresponding features on successive sections lie close to each other. The proposed method can be used with any existing 2D image registration method for 3D reconstruction. In particular, we present a novel image warping algorithm based on dynamic programming that extends Dynamic Time Warping in 1D speech recognition to compute pairwise warps between high-resolution 2D images. The warping algorithm efficiently computes a restricted class of 2D local deformations that are characteristic between successive tissue sections. Finally, a validation framework is proposed and applied to evaluate the quality of reconstruction using both real sections and a synthetic volume.
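    The Gaussian averaging of pairwise warps can be illustrated as follows; this is a minimal sketch under assumed conventions (displacement-field representation, dictionary keyed by section index), not the authors' code:

    ```python
    # Average a section's pairwise warps to its neighbours with Gaussian weights.
    import numpy as np

    def average_warp(pairwise_warps, section, radius=3, sigma=1.5):
        """pairwise_warps[j] is the displacement field (shape (2, H, W)) that maps
        `section` onto neighbouring section j; missing neighbours are skipped."""
        weights, fields = [], []
        for k in range(-radius, radius + 1):
            j = section + k
            if j in pairwise_warps:
                weights.append(np.exp(-0.5 * (k / sigma) ** 2))
                fields.append(pairwise_warps[j])
        weights = np.asarray(weights) / np.sum(weights)
        # Weighted average deforms the section towards its neighbourhood consensus.
        return np.tensordot(weights, np.stack(fields), axes=1)
    ```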

  9. Accurately measuring volume of soil samples using low cost Kinect 3D scanner

    NASA Astrophysics Data System (ADS)

    van der Sterre, Boy-Santhos; Hut, Rolf; van de Giesen, Nick

    2013-04-01

    The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture using samples is determining the volume of the sample. Currently this is mostly done by the very time consuming "sand cone method" in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the $150 game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.

  10. Accurately measuring volume of soil samples using low cost Kinect 3D scanner

    NASA Astrophysics Data System (ADS)

    van der Sterre, B.; Hut, R.; Van De Giesen, N.

    2012-12-01

    The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture using samples is determining the volume of the sample. Currently this is mostly done by the very time consuming "sand cone method" in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the $150 game controller extension "Kinect" can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.

  11. 3-D foliation unfolding with volume and bed-length least-squares conservation

    SciTech Connect

    Leger, M.; Morvan, J.M.; Thibaut, M.

    1994-12-31

    Restoration of a geologic structure at earlier times is a good means to assess, and then improve, its interpretation. Restoration software already exists in 2D, but a lot of work remains to be done in 3D. The authors focus on the interbedding slip phenomenon, with bed-length and volume conservation. They unfold a (geometrical) foliation by optimizing the following least-squares criteria: horizontalness, bed-length and volume conservation, under equality constraints related to the position of the "binding" or "pin-surface".

  12. Estimation of single cell volume from 3D confocal images using automatic data processing

    NASA Astrophysics Data System (ADS)

    Chorvatova, A.; Cagalinec, M.; Mateasik, A.; Chorvat, D., Jr.

    2012-06-01

    Cardiac cells are highly structured with a non-uniform morphology. Although precise estimation of their volume is essential for correct evaluation of hypertrophic changes of the heart, simple and unified techniques that allow determination of the single cardiomyocyte volume with sufficient precision are still limited. Here, we describe a novel approach to assess the cell volume from confocal microscopy 3D images of living cardiac myocytes. We propose a fast procedure based on segmentation using active deformable contours. This technique is independent of laser gain and/or pinhole settings and is also applicable to images of cells stained with low fluorescence markers. The presented approach is a promising new tool to investigate changes in the cell volume during normal as well as pathological growth, as we demonstrate in the case of cell enlargement during hypertension in rats.

  13. 3D quantification of microclimate volume in layered clothing for the prediction of clothing insulation.

    PubMed

    Lee, Yejin; Hong, Kyunghi; Hong, Sung-Ae

    2007-05-01

    Garment fit and the resultant air volume are crucial factors in thermal insulation, and yet it has been difficult to quantify the air volume of the clothing microclimate and relate it to the thermal insulation value using only information on clothing pattern size, without actual 3D volume measurement under wear conditions. Earlier methods for computing the air volume of the clothing microclimate, the vacuum-over-suit and circumference models, have inevitable disadvantages in terms of cost or accuracy due to the limitations of the measurement equipment. In this paper, phase-shifting moiré topography was introduced as a 3D scanning tool to measure the air volume of the clothing microclimate quantitatively. The purpose of this research is to adopt a non-contact image scanning technology, phase-shifting moiré topography, to ascertain the relationship between air volume and the insulation value of layered clothing systems in wear situations where the 2D fabric creates new conditions in 3D spaces. The insulation of vests over shirts as a layered clothing system was measured with a thermal manikin in an environmental condition of 20 degrees C, 65% RH and an air velocity of 0.79 m/s. As the pattern size increased, the insulation of the clothing system increased. But beyond a certain limit, the insulation started to decrease due to convection and ventilation, which was more apparent when only the vest was worn over the torso of the manikin. The relationship between clothing air volume and insulation was difficult to predict with a single vest due to the extreme openings, which induced active ventilation. But when the vest was worn over the shirt, the effect of fabric thickness on insulation was less pronounced compared with that of air volume. In conclusion, phase-shifting moiré topography was one of the efficient and accurate ways of quantifying air volume and its distribution across the clothing microclimate. It is also noted

  14. Glacial isostatic adjustment on 3-D Earth models: a finite-volume formulation

    NASA Astrophysics Data System (ADS)

    Latychev, Konstantin; Mitrovica, Jerry X.; Tromp, Jeroen; Tamisiea, Mark E.; Komatitsch, Dimitri; Christara, Christina C.

    2005-05-01

    We describe and present results from a finite-volume (FV) parallel computer code for forward modelling the Maxwell viscoelastic response of a 3-D, self-gravitating, elastically compressible Earth to an arbitrary surface load. We implement a conservative, control volume discretization of the governing equations using a tetrahedral grid in Cartesian geometry and a low-order, linear interpolation. The basic starting grid honours all major radial discontinuities in the Preliminary Reference Earth Model (PREM), and the models are permitted arbitrary spatial variations in viscosity and elastic parameters. These variations may be either continuous or discontinuous at a set of grid nodes forming a 3-D surface within the (regional or global) modelling domain. In the second part of the paper, we adopt the FV methodology and a spherically symmetric Earth model to generate a suite of predictions sampling a broad class of glacial isostatic adjustment (GIA) data types (3-D crustal motions, long-wavelength gravity anomalies). These calculations, based on either a simple disc load history or a global Late Pleistocene ice load reconstruction (ICE-3G), are benchmarked against predictions generated using the traditional normal-mode approach to GIA. The detailed comparison provides a guide for future analyses (e.g. what grid resolution is required to obtain a specific accuracy?) and it indicates that discrepancies in predictions of 3-D crustal velocities less than 0.1 mm yr⁻¹ are generally obtainable for global grids with ~3 × 10⁶ nodes; however, grids of higher resolution are required to predict large-amplitude (>1 cm yr⁻¹) radial velocities in zones of peak post-glacial uplift (e.g. James Bay) to the same level of absolute accuracy. We conclude the paper with a first application of the new formulation to a 3-D problem. Specifically, we consider the impact of mantle viscosity heterogeneity on predictions of present-day 3-D crustal motions in North America. In these tests, the

  15. Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; VanGelder, Allen; Tarantino, Paul; Gibbs, Jonathan

    1996-01-01

    A general volume rendering technique is described that efficiently produces images of excellent quality from data defined over irregular grids having a wide variety of formats. Rendering is done in software, eliminating the need for special graphics hardware, as well as any artifacts associated with graphics hardware. Images of volumes with about one million cells can be produced in one to several minutes on a workstation with a 150 MHz processor. A significant advantage of this method for applications such as computational fluid dynamics is that it can process multiple intersecting grids. Such grids present problems for most current volume rendering techniques. Also, the wide range of cell sizes (by a factor of 10,000 or more), which is typical of such applications, does not present difficulties, as it does for many techniques. A spatial hierarchical organization makes it possible to access data from a restricted region efficiently. The tree has greater depth in regions of greater detail, determined by the number of cells in the region. It also makes it possible to render useful 'preview' images very quickly (about one second for one-million-cell grids) by displaying each region associated with a tree node as one cell. Previews show enough detail to navigate effectively in very large data sets. The algorithmic techniques include use of a k-d tree, with prefix-order partitioning of triangles, to reduce the number of primitives that must be processed for one rendering, coarse-grain parallelism for a shared-memory MIMD architecture, a new perspective transformation that achieves greater numerical accuracy, and a scanline algorithm with depth sorting and a new clipping technique.

  16. Online volume rendering of incrementally accumulated LSCEM images for superficial oral cancer detection.

    PubMed

    Chiew, Wei Ming; Lin, Feng; Qian, Kemao; Seah, Hock Soon

    2011-04-10

    The laser scanning confocal endomicroscope (LSCEM) has emerged as an imaging modality that provides non-invasive, in vivo imaging of biological tissue on a microscopic scale. Scientific visualizations for LSCEM datasets captured by current imaging systems require these datasets to be fully acquired and brought to a separate rendering machine. To extend the features and capabilities of this modality, we propose a system which is capable of performing realtime visualization of LSCEM datasets. Using field-programmable gate arrays, our system performs three tasks in parallel: (1) automated control of dataset acquisition; (2) imaging-rendering system synchronization; and (3) realtime volume rendering of dynamic datasets. Through fusion of LSCEM imaging and volume rendering processes, acquired datasets can be visualized in realtime to provide an immediate perception of the image quality and biological conditions of the subject, further assisting in realtime cancer diagnosis. Subsequently, the imaging procedure can be improved for more accurate diagnosis, reducing the need to repeat the process due to unsatisfactory datasets. PMID:21611094

  17. Volume quantization of the mouse cerebellum by semiautomatic 3D segmentation of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sijbers, Jan; Van der Linden, Anne-Marie; Scheunders, Paul; Van Audekerke, Johan; Van Dyck, Dirk; Raman, Erik R.

    1996-04-01

    The aim of this work is the development of a non-invasive technique for efficient and accurate volume quantization of the cerebellum of mice. This enables an in-vivo study on the development of the cerebellum in order to define possible alterations in cerebellum volume of transgenic mice. We concentrate on a semi-automatic segmentation procedure to extract the cerebellum from 3D magnetic resonance data. The proposed technique uses a 3D variant of Vincent and Soille's immersion-based watershed algorithm which is applied to the gradient magnitude of the MR data. The algorithm results in a partitioning of the data in volume primitives. The known drawback of the watershed algorithm, over-segmentation, is strongly reduced by a priori application of an adaptive anisotropic diffusion filter on the gradient magnitude data. In addition, over-segmentation is further reduced a posteriori, where necessary, by properly merging volume primitives based on the minimum description length principle. The outcome of the preceding image processing step is presented to the user for manual segmentation. The first slice which contains the object of interest is quickly segmented by the user through selection of basic image regions. Subsequently, the remaining slices are automatically segmented. The segmentation results are manually corrected where necessary. The technique is tested on phantom objects, where segmentation errors less than 2% were observed. Three-dimensional reconstructions of the segmented data are shown for the mouse cerebellum and the mouse brains in toto.
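    A rough stand-in for the pipeline described above: the original applies adaptive anisotropic diffusion before a 3D immersion watershed on the gradient magnitude; the sketch below substitutes edge-preserving total-variation denoising, and all parameter values are assumptions:

    ```python
    # Pre-smoothing plus 3D watershed on the gradient magnitude.
    from scipy.ndimage import gaussian_gradient_magnitude
    from skimage.restoration import denoise_tv_chambolle
    from skimage.segmentation import watershed

    def watershed_primitives(mr_volume, markers=None, weight=0.05, sigma=1.0):
        smoothed = denoise_tv_chambolle(mr_volume.astype(float), weight=weight)
        gradmag = gaussian_gradient_magnitude(smoothed, sigma=sigma)
        # Without markers every regional minimum seeds a primitive; the resulting
        # over-segmentation is then reduced by merging, as described in the paper.
        return watershed(gradmag, markers=markers)
    ```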

  18. A new method to combine 3D reconstruction volumes for multiple parallel circular cone beam orbits

    PubMed Central

    Baek, Jongduk; Pelc, Norbert J.

    2010-01-01

    Purpose: This article presents a new reconstruction method for 3D imaging using a multiple 360° circular orbit cone beam CT system, specifically a way to combine 3D volumes reconstructed with each orbit. The main goal is to improve the noise performance in the combined image while avoiding cone beam artifacts. Methods: The cone beam projection data of each orbit are reconstructed using the FDK algorithm. When at least a portion of the total volume can be reconstructed by more than one source, the proposed combination method combines these overlap regions using weighted averaging in frequency space. The local exactness and the noise performance of the combination method were tested with computer simulations of a Defrise phantom, a FORBILD head phantom, and uniform noise in the raw data. Results: A noiseless simulation showed that the local exactness of the reconstructed volume from the source with the smallest tilt angle was preserved in the combined image. A noise simulation demonstrated that the combination method improved the noise performance compared to a single orbit reconstruction. Conclusions: In CT systems which have overlap volumes that can be reconstructed with data from more than one orbit and in which the spatial frequency content of each reconstruction can be calculated, the proposed method offers improved noise performance while keeping the local exactness of data from the source with the smallest tilt angle. PMID:21089770
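    The frequency-space combination of overlapping reconstructions can be illustrated with a short sketch; the actual per-frequency weights in the paper depend on the frequency content of each orbit's reconstruction and are assumed here to be supplied by the caller:

    ```python
    # Weighted averaging of two overlapping reconstructions in frequency space.
    import numpy as np

    def combine_in_frequency_space(vol_a, vol_b, weight_a):
        """`weight_a` holds per-frequency weights in [0, 1] with the same shape as
        the FFT of the overlap volume; vol_b receives the complementary weight."""
        fa = np.fft.fftn(vol_a)
        fb = np.fft.fftn(vol_b)
        combined = weight_a * fa + (1.0 - weight_a) * fb
        return np.real(np.fft.ifftn(combined))
    ```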

  19. Fully analytical integration over the 3D volume bounded by the β sphere in topological atoms.

    PubMed

    Popelier, Paul L A

    2011-11-17

    Atomic properties of a topological atom are obtained by 3D integration over the volume of its atomic basin. Algorithms that compute atomic properties typically integrate over two subspaces: the volume bounded by the so-called β sphere, which is centered at the nucleus and completely contained within the atomic basin, and the volume of the remaining part of the basin. Here we show how the usual quadrature over the β sphere volume can be replaced by a fully analytical 3D integration leading to the atomic charge (monopole moment) for s, p, and d functions. Spherical tensor multipole moments have also been implemented and tested up to hexadecupole for s functions only, and up to quadrupole for s and p functions. The new algorithm is illustrated by operating on capped glycine (HF/6-31G, 35 molecular orbitals (MOs), 322 Gaussian primitives, 19 nuclei), the protein crambin (HF/3-21G, 1260 MOs, 5922 primitives and 642 nuclei), and tin (Z = 50) in Sn(2)(CH(3))(2) (B3LYP/cc-pVTZ and LANL2DZ, 59 MOs, 1352 primitives). PMID:21978204

  20. On 3-D inelastic analysis methods for hot section components. Volume 1: Special finite element models

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1988-01-01

    This annual status report presents the results of work performed during the fourth year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes permitting more accurate and efficient 3-D analysis of selected hot section components, i.e., combustor liners, turbine blades and turbine vanes. The computer codes embody a progression of math models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. Volume 1 of this report discusses the special finite element models developed during the fourth year of the contract.

  1. On 3-D inelastic analysis methods for hot section components. Volume 1: Special finite element models

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1987-01-01

    This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.

  2. Registration of 3D spectral OCT volumes combining ICP with a graph-based approach

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.; Sonka, Milan

    2012-02-01

    The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases such as glaucoma and age-related macular degeneration. To follow-up patients over time, image registration is a vital tool to enable more precise, quantitative comparison of disease states. In this work we present a 3D registration method based on a two-step approach. In the first step we register both scans in the XY domain using an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between matching A-scans in both images at different translations. We have applied this method to the registration of Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17 OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's robustness.
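    A minimal sketch of the cost underlying the second (depth-alignment) step: the MSE between matching A-scans at candidate integer shifts. In the paper these costs feed a graph search that enforces a smooth translation surface; the brute-force minimum below is only illustrative and the function and parameter names are assumptions:

    ```python
    # Find the integer depth shift minimising the MSE between two matching A-scans.
    import numpy as np

    def best_depth_shift(ascan_ref, ascan_mov, max_shift=50):
        costs = []
        for s in range(-max_shift, max_shift + 1):
            shifted = np.roll(ascan_mov, s)
            # Ignore wrapped-around samples when scoring the overlap.
            overlap = slice(s, None) if s >= 0 else slice(None, s)
            costs.append(np.mean((ascan_ref[overlap] - shifted[overlap]) ** 2))
        return np.argmin(costs) - max_shift   # signed shift with lowest MSE
    ```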

  3. Reconstruction Error of Calibration Volume's Coordinates for 3D Swimming Kinematics.

    PubMed

    Figueiredo, Pedro; Machado, Leandro; Vilas-Boas, João Paulo; Fernandes, Ricardo J

    2011-09-01

    The aim of this study was to investigate the accuracy and reliability of above and underwater 3D reconstruction of three calibration volumes with different control point arrangements (#1 - on vertical and horizontal rods; #2 - on vertical and horizontal rods and facets; #3 - on crossed horizontal rods). Each calibration volume (3 × 2 × 3 m) was positioned in a 25 m swimming pool (half above and half below the water surface) and recorded with four underwater and two above water synchronised cameras (50 Hz). Reconstruction accuracy was determined by calculating the RMS error of twelve validation points. The standard deviation across all digitisations of the same marker was used to assess reliability. Comparison among different numbers of control points showed that the set of 24 points produced the most accurate results. Volume #2 presented higher accuracy (RMS errors: 5.86 and 3.59 mm for x axis, 3.45 and 3.11 mm for y axis and 4.38 and 4.00 mm for z axis, considering under and above water, respectively) and reliability (SD: underwater cameras ± [0.2; 0.6] mm; above water cameras ± [0.2; 0.3] mm) that may be considered suitable for 3D swimming kinematic analysis. Results revealed that RMS error was greater during underwater analysis, possibly due to refraction. PMID:23486761

  4. Diagnostic Capability of Peripapillary Retinal Thickness in Glaucoma Using 3D Volume Scans

    PubMed Central

    Simavli, Huseyin; Que, Christian John; Akduman, Mustafa; Rizzo, Jennifer L.; Tsikata, Edem; de Boer, Johannes F.; Chen, Teresa C.

    2015-01-01

    Purpose: To determine the diagnostic capability of spectral domain optical coherence tomography (SD-OCT) peripapillary retinal thickness (RT) measurements from 3-dimensional (3D) volume scans for primary open angle glaucoma (POAG). Design: Cross-sectional study. Methods: Setting: institutional. Study population: 156 patients (89 POAG and 67 normal subjects). Observation procedures: One eye of each subject was included. SD-OCT peripapillary RT values from 3D volume scans were calculated for four quadrants of three different sized annuli. Peripapillary retinal nerve fiber layer (RNFL) thickness values were also determined. Main outcome measures: Area under the receiver operating characteristic curve (AUROC) values, sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios. Results: The top five RT AUROCs for all glaucoma patients and for a subset of early glaucoma patients were for the inferior quadrant of outer circumpapillary annulus of circular grid (OCA) 1 (0.959, 0.939), inferior quadrant of OCA2 (0.945, 0.921), superior quadrant of OCA1 (0.890, 0.811), inferior quadrant of OCA3 (0.887, 0.854), and superior quadrant of OCA2 (0.879, 0.807). Smaller RT annuli OCA1 and OCA2 consistently showed better diagnostic performance than the larger RT annulus OCA3. For both RNFL and RT measurements, best AUROC values were found for inferior RT OCA1 and OCA2, followed by inferior and overall RNFL thickness. Conclusion: Peripapillary RT measurements from 3D volume scans showed excellent diagnostic performance for detecting both glaucoma and early glaucoma patients. Peripapillary RT values have the same or better diagnostic capability compared to peripapillary RNFL thickness measurements, while also having fewer algorithm errors. PMID:25498354

  5. Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria

    1997-01-01

    The three-dimensional shape and relative depth of a smoothly curving layered transparent surface may be communicated particularly effectively when the surface is artistically enhanced with sparsely distributed opaque detail. This paper describes how the set of principal directions and principal curvatures specified by local geometric operators can be understood to define a natural 'flow' over the surface of an object, and can be used to guide the placement of the lines of a stroke texture that seeks to represent 3D shape information in a perceptually intuitive way. The driving application for this work is the visualization of layered isovalue surfaces in volume data, where the particular identity of an individual surface is not generally known a priori and observers will typically wish to view a variety of different level surfaces from the same distribution, superimposed over underlying opaque structures. By advecting an evenly distributed set of tiny opaque particles, and the empty space between them, via 3D line integral convolution through the vector field defined by the principal directions and principal curvatures of the level surfaces passing through each gridpoint of a 3D volume, it is possible to generate a single scan-converted solid stroke texture that may intuitively represent the essential shape information of any level surface in the volume. To generate longer strokes over more highly curved areas, where the directional information is both most stable and most relevant, and to simultaneously downplay the visual impact of directional information in the flatter regions, one may dynamically redefine the length of the filter kernel according to the magnitude of the maximum principal curvature of the level surface at the point around which it is applied.

  6. Interpretation of a 3D Seismic-Reflection Volume in the Basin and Range, Hawthorne, Nevada

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Kell, A. M.; Pullammanappallil, S.; Oldow, J. S.; Sabin, A.; Lazaro, M.

    2009-12-01

    A collaborative effort by the Great Basin Center for Geothermal Energy at the University of Nevada, Reno, and Optim Inc. of Reno has interpreted a 3D seismic data set recorded by the U.S. Navy Geothermal Programs Office (GPO) at the Hawthorne Army Depot, Nevada. The 3D survey incorporated about 20 NNW-striking lines covering an area of approximately 3 by 10 km. The survey covered an alluvial area below the eastern flank of the Wassuk Range. In the reflection volume the most prominent events are interpreted to be the base of Quaternary alluvium, the Quaternary Wassuk Range-front normal fault zone, and sequences of intercalated Tertiary volcanic flows and sediments. Such a data set is rare in the Basin and Range. Our interpretation reveals structural and stratigraphic details that form a basis for rapid development of the geothermal-energy resources underlying the Depot. We interpret a map of the time-elevation of the Wassuk Range fault and its associated splays and basin-ward step faults. The range-front fault is the deepest, and its isochron map provides essentially a map of "economic basement" under the prospect area. There are three faults that are the most readily picked through vertical sections. The fault reflections show an uncertainty of 50 to 200 ms in the time-depth that we can interpret for them, due to the over-migrated appearance of the processing contractor’s prestack time-migrated data set. Proper assessment of velocities for mitigating the migration artifacts through prestack depth migration is not possible from this data set alone, as the offsets are not long enough for sufficiently deep velocity tomography. The three faults we interpreted appear as gradients in potential-field maps. In addition, the southern boundary of a major Tertiary graben may be seen within the volume as the northward termination of the strong reflections from older Tertiary volcanics. Using a transparent volume view across the survey gives a view of the volcanics in full

  7. Web-based volume slicer for 3D electron-microscopy data from EMDB

    PubMed Central

    Salavert-Torres, José; Iudin, Andrii; Lagerstedt, Ingvar; Sanz-García, Eduardo; Kleywegt, Gerard J.; Patwardhan, Ardan

    2016-01-01

    We describe the functionality and design of the Volume slicer – a web-based slice viewer for EMDB entries. This tool uniquely provides the facility to view slices from 3D EM reconstructions along the three orthogonal axes and to rapidly switch between them and navigate through the volume. We have employed multiple rounds of user-experience testing with members of the EM community to ensure that the interface is easy and intuitive to use and the information provided is relevant. The impetus to develop the Volume slicer has been calls from the EM community to provide web-based interactive visualisation of 2D slice data. This would be useful for quick initial checks of the quality of a reconstruction. Again in response to calls from the community, we plan to further develop the Volume slicer into a fully-fledged Volume browser that provides integrated visualisation of EMDB and PDB entries from the molecular to the cellular scale. PMID:26876163

  8. Quantification of cerebral ventricle volume change of preterm neonates using 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Chen, Yimin; Kishimoto, Jessica; Qiu, Wu; de Ribaupierre, Sandrine; Fenster, Aaron; Chiu, Bernard

    2015-03-01

    Intraventricular hemorrhage (IVH) is a major cause of brain injury in preterm neonates. Quantitative measurement of ventricular dilation or shrinkage is important for monitoring patients and for evaluating treatment options. 3D ultrasound (US) has been used to monitor the ventricle volume as a biomarker for ventricular dilation. However, volumetric quantification does not provide information as to where dilation occurs. The location where dilation occurs may be related to specific neurological problems later in life. For example, posterior horn enlargement, with thinning of the corpus callosum and parietal white matter fibres, could be linked to poor visuo-spatial abilities seen in hydrocephalic children. In this work, we report on the development and application of a method used to analyze local surface change of the ventricles of preterm neonates with IVH from 3D US images. The technique is evaluated using manual segmentations from 3D US images acquired in two imaging sessions. The surfaces from baseline and follow-up were registered and then matched on a point-by-point basis. The distance between each pair of corresponding points served as an estimate of local surface change of the brain ventricle at each vertex. The measurements of local surface change were then superimposed on the ventricle surface to produce a 3D local surface change map that provides information on the spatio-temporal dilation pattern of brain ventricles following IVH. This tool can be used to monitor responses to different treatment options, and may provide important information for elucidating the deficiencies a patient will have later in life.
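    Once the baseline and follow-up surfaces are registered, a point-wise distance map can be approximated as below; this simplified sketch uses nearest-neighbour distances rather than the paper's explicit point-by-point correspondences, and the function name is an assumption:

    ```python
    # Approximate local surface change as per-vertex nearest-neighbour distance.
    from scipy.spatial import cKDTree

    def local_surface_change(baseline_vertices, followup_vertices):
        """Both inputs are (N, 3) arrays of registered surface points; returns one
        distance per baseline vertex, suitable for colour-mapping onto the surface."""
        tree = cKDTree(followup_vertices)
        distances, _ = tree.query(baseline_vertices)
        return distances
    ```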

  9. Volume analysis of treatment response of head and neck lesions using 3D level set segmentation

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Street, Ethan; Sahiner, Berkman; Gujar, Sachin; Ibrahim, Mohannad; Chan, Heang-Ping; Mukherji, Suresh K.

    2008-03-01

    A computerized system for segmenting lesions in head and neck CT scans was developed to assist radiologists in estimation of the response to treatment of malignant lesions. The system performs 3D segmentations based on a level set model and uses as input an approximate bounding box for the lesion of interest. In this preliminary study, CT scans from a pre-treatment exam and a post one-cycle chemotherapy exam of 13 patients containing head and neck neoplasms were used. A radiologist marked 35 temporal pairs of lesions. 13 pairs were primary site cancers and 22 pairs were metastatic lymph nodes. For all lesions, a radiologist outlined a contour on the best slice on both the pre- and post treatment scans. For the 13 primary lesion pairs, full 3D contours were also extracted by a radiologist. The average pre- and post-treatment areas on the best slices for all lesions were 4.5 and 2.1 cm², respectively. For the 13 primary site pairs the average pre- and post-treatment primary lesion volumes were 15.4 and 6.7 cm³, respectively. The correlation between the automatic and manual estimates for the pre-to-post-treatment change in area for all 35 pairs was r=0.97, while the correlation for the percent change in area was r=0.80. The correlation for the change in volume for the 13 primary site pairs was r=0.89, while the correlation for the percent change in volume was r=0.79. The average signed percent error between the automatic and manual areas for all 70 lesions was 11.0+/-20.6%. The average signed percent error between the automatic and manual volumes for all 26 primary lesions was 37.8+/-42.1%. The preliminary results indicate that the automated segmentation system can reliably estimate tumor size change in response to treatment relative to radiologist's hand segmentation.

  10. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is first defined according to the intensity difference within a local neighborhood. Then a global energy is defined by integrating the local energy over all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.

  11. Ash3d: A finite-volume, conservative numerical model for ash transport and tephra deposition

    USGS Publications Warehouse

    Schwaiger, Hans F.; Denlinger, Roger P.; Mastin, Larry G.

    2012-01-01

    We develop a transient, 3-D Eulerian model (Ash3d) to predict airborne volcanic ash concentration and tephra deposition during volcanic eruptions. This model simulates downwind advection, turbulent diffusion, and settling of ash injected into the atmosphere by a volcanic eruption column. Ash advection is calculated using time-varying pre-existing wind data and a robust, high-order, finite-volume method. Our routine is mass-conservative and uses the coordinate system of the wind data, either a Cartesian system local to the volcano or a global spherical system for the Earth. Volcanic ash is specified with an arbitrary number of grain sizes, which affects the fall velocity, distribution and duration of transport. Above the source volcano, the vertical mass distribution with elevation is calculated using a Suzuki distribution for a given plume height, eruptive volume, and eruption duration. Multiple eruptions separated in time may be included in a single simulation. We test the model using analytical solutions for transport. Comparisons of the predicted and observed ash distributions for the 18 August 1992 eruption of Mt. Spurr in Alaska demonstrate the efficacy and efficiency of the routine.
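    For illustration only (this is not the Ash3d code), a one-dimensional, conservative upwind finite-volume advection step of the kind such a mass-conservative transport scheme builds on; the real model is 3-D, higher order, and adds turbulent diffusion and settling:

    ```python
    # One explicit upwind finite-volume advection step (requires u * dt / dx <= 1).
    import numpy as np

    def advect_1d(q, u, dx, dt):
        """q: cell-averaged concentrations, u: constant wind speed (u >= 0)."""
        flux = u * q                      # flux leaving each cell to the right
        flux_in = np.roll(flux, 1)        # flux entering from the left neighbour
        flux_in[0] = 0.0                  # no inflow at the upwind boundary
        # Conservative update: mass only changes through fluxes across cell faces.
        return q - dt / dx * (flux - flux_in)
    ```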

  12. Local intersection volume: a new 3D descriptor applied to develop a 3D-QSAR pharmacophore model for benzodiazepine receptor ligands.

    PubMed

    Verli, Hugo; Albuquerque, Magaly Girão; Bicca de Alencastro, Ricardo; Barreiro, Eliezer J

    2002-03-01

    In this work, we have developed a new descriptor, named local intersection volume (LIV), in order to compose a 3D-QSAR pharmacophore model for benzodiazepine receptor ligands. The LIV can be classified as a 3D local shape descriptor in contraposition to the global shape descriptors. We have selected from the literature 49 non-benzodiazepine compounds as a training data set and the model was obtained and evaluated by genetic algorithms (GA) and partial least-squares (PLS) methods using LIVs as descriptors. The LIV 3D-QSAR model has a good predictive capacity according the cross-validation test by "leave-one-out" procedure (Q(2)=0.72). The developed model was compared to a comprehensive and extensive SAR pharmacophore model, recently proposed by Cook and co-workers, for benzodiazepine receptor ligands [J. Med. Chem. 43 (2000) 71]. It showed a relevant correlation with the pharmacophore groups pointed out in that work. Our LIV 3D-QSAR model was also able to predict affinity values for a series of nine compounds (test data set) that was not included into the training data set. PMID:11900866

  13. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  14. Tangible 3D printouts of scientific data volumes with FOSS - an emerging field for research

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Klump, Jens; Wickert, Jens; Ludwig, Marcel; Frigeri, Alessandro

    2013-04-01

    Humans are very good in using both hands and eyes for tactile pattern recognition: The German verb for understanding, "begreifen", literally means "getting a (tactile) grip on a matter". This proven and time-honoured concept has been in use since prehistoric times. While the amount of scientific data continues to grow, researchers still need all the support they can get to visualize the data content in their mind's eye. Immersive data-visualisations are helpful, yet fail to provide the tactile feedback offered by tangible objects. The need for tangible representations of geospatial information to solve real world problems eventually led to the advent of 3d-globes by M. Behaim in the 15th century and has continued since. The production of a tangible representation of a scientific data set with some fidelity is just the final step of an arc, leading from the physical world into scientific reasoning and back: The process starts with a physical observation, or a model, by a sensor which produces a data stream which is turned into a geo-referenced data set. This data is turned into a volume representation which is converted into command sequences for the printing device, leading to the creation of a 3d-printout. Finally, the new specimen has to be linked to its metadata to ensure its scientific meaning and context. On the technical side, the production of a tangible data-print has been realized as a pilot workflow based on the Free and Open Source Geoinformatics tools GRASS GIS and Paraview to convert scientific data volumes into stereolithography (STL) datasets for printing on a RepRap printer. The initial motivation to use tangible representations of complex data was the task of quality assessments on tsunami simulation data sets in the FP7 TRIDEC project (www.tridec-online.eu). For this, 3d-prints of space time cubes of tsunami wave spreading patterns were produced. This was followed by print-outs of volume data derived from radar sounders (MARSIS, SHARAD) imaging
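    An illustrative Python analogue of the volume-to-printout step (the record's workflow uses GRASS GIS and Paraview): extract an isosurface from a data volume and write an ASCII STL file for the printer. The iso-level, file name and use of scikit-image are assumptions:

    ```python
    # Convert a scalar data volume to an ASCII STL isosurface for 3D printing.
    import numpy as np
    from skimage import measure

    def volume_to_stl(volume, level, path="model.stl"):
        verts, faces, _, _ = measure.marching_cubes(volume, level=level)
        with open(path, "w") as f:
            f.write("solid volume\n")
            for tri in faces:
                v0, v1, v2 = verts[tri]
                n = np.cross(v1 - v0, v2 - v0)
                n = n / (np.linalg.norm(n) + 1e-12)   # unit facet normal
                f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
                for v in (v0, v1, v2):
                    f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                f.write("    endloop\n  endfacet\n")
            f.write("endsolid volume\n")
    ```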

  15. Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu

    1999-01-01

    We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure that is capable of capturing both the temporal and the spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values existing in the spatial domain. However, when treating time merely as another dimension for a time-varying field, difficulties frequently arise due to the discrepancy between the field's spatial and temporal resolutions. In addition, treating spatial and temporal dimensions equally often prevents the possibility of detecting the coherence that is unique in the temporal domain. Using the proposed data structure, our algorithm can meet the following goals. First, both spatial and temporal coherence are identified and exploited for accelerating the rendering process. Second, our algorithm allows the user to supply the desired error tolerances at run time for the purpose of image-quality/rendering-speed trade-off. Third, the amount of data that are required to be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
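    A much-simplified illustration of exploiting temporal coherence, inspired by but not reproducing the TSP tree: blocks whose values barely change over a span of time steps can be loaded and shaded once and reused, within a user-supplied tolerance. The block size and tolerance below are assumed parameters:

    ```python
    # Flag volume blocks whose temporal variation is within a user tolerance.
    import numpy as np

    def temporally_coherent_blocks(timesteps, block=16, tol=0.01):
        """timesteps: array of shape (T, Z, Y, X). Returns a boolean grid of blocks
        whose temporal standard deviation stays below `tol` * data range."""
        T, Z, Y, X = timesteps.shape
        data_range = timesteps.max() - timesteps.min()
        bz, by, bx = Z // block, Y // block, X // block
        trimmed = timesteps[:, :bz * block, :by * block, :bx * block]
        blocks = trimmed.reshape(T, bz, block, by, block, bx, block)
        # Mean over space of the per-voxel temporal standard deviation per block.
        temporal_std = blocks.std(axis=0).mean(axis=(1, 3, 5))
        return temporal_std <= tol * data_range
    ```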

  16. Benchmarking of state-of-the-art needle detection algorithms in 3D ultrasound data volumes

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; de With, Peter H. N.; Korsten, Hendrikus H. M.; Mihajlovic, Nenad

    2015-03-01

    Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, i.e. for biopsy guidance, regional anesthesia or for brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to the poor needle visibility and the limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analyses. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from literature and a system employing Gabor transformation. Both algorithms utilize supervised classification to pre-select candidate needle voxels in the volume and then fit a model of the needle on the selected voxels. The major differences between the two approaches are in extracting the feature vectors for classification and selecting the criterion for fitting. We evaluate the performance of the two techniques using manually-annotated ground truth in several ex-vivo situations of different complexities, containing three different needle types with various insertion angles. This extensive evaluation provides a better understanding of the limitations and advantages of each technique under different acquisition conditions, guiding the development of improved techniques for more reliable and accurate localization. Benchmarking shows that the Gabor features are better at distinguishing the needle voxels in all datasets. Moreover, it is shown that the complete processing chain of the Gabor-based method outperforms the line filtering in accuracy and stability of the detection results.

  17. Extended volume and surface scatterometer for optical characterization of 3D-printed elements

    NASA Astrophysics Data System (ADS)

    Dannenberg, Florian; Uebeler, Denise; Weiß, Jürgen; Pescoller, Lukas; Weyer, Cornelia; Hahlweg, Cornelius

    2015-09-01

    The use of 3d printing technology seems to be a promising way for low-cost prototyping, not only of mechanical, but also of optical components or systems. It is especially useful in applications where customized equipment is repeatedly subject to immediate destruction, as in experimental detonics and the like. Due to the nature of the 3D-printing process, there is a certain inner texture and therefore inhomogeneous optical behaviour to be taken into account, which also indicates mechanical anisotropy. Recent investigations are dedicated to the quantification of optical properties of such printed bodies and the derivation of corresponding optimization strategies for the printing process. Besides mounting, alignment and illumination means, refractive and reflective elements are also subject to investigation. The proposed measurement methods are based on an imaging nearfield scatterometer for combined volume and surface scatter measurements as proposed in previous papers. In continuation of last year's paper on the use of near field imaging, which basically is a reflective shadowgraph method, for characterization of glossy surfaces like printed matter or laminated material, further developments are discussed. The device has been extended for observation of photoelasticity effects and therefore homogeneity of polarization behaviour. A refined experimental set-up is introduced. Variation of the plane of focus and the incident angle is used to separate the images of the various layers of the surface under test; cross and parallel polarization techniques are applied. Practical examples from current research studies are included.

  18. Characterization of neonatal patients with intraventricular hemorrhage using 3D ultrasound cerebral ventricle volumes

    NASA Astrophysics Data System (ADS)

    Kishimoto, Jessica; Fenster, Aaron; Lee, David S. C.; de Ribaupierre, Sandrine

    2015-03-01

    One of the major non-congenital causes of neurological impairment among neonates born very preterm is intraventricular hemorrhage (IVH) - bleeding within the lateral ventricles. Most IVH patients will have a transient period of ventricle dilation that resolves spontaneously. However, the patients most at risk of long-term impairment are those who have progressive ventricle dilation, as this causes macrocephaly, an abnormally enlarged head, and later increased intracranial pressure (ICP). 2D ultrasound (US) images through the fontanelles of the patients are serially acquired to monitor the progression of the ventricle dilation. These images are used to determine when interventional therapies such as needle aspiration of the built-up CSF might be indicated for a patient. Initial therapies usually begin during the third week of life. Such interventions have been shown to decrease morbidity and mortality in IVH patients; however, they carry risks of further hemorrhage or infection, so only patients requiring them should be treated. Previously we have developed and validated a 3D US system to monitor the progression of ventricle volumes (VV) in IVH patients. This system has been validated using phantoms and a small set of patient images. The aim of this work is to determine the ability of 3D US-generated VV to categorize patients into those who will require interventional therapies and those who will have spontaneous resolution. Higher-risk patients could then be monitored more closely by re-allocating some resources, as lower-risk infants would need less monitoring.

  19. Swarm Intelligence Integrated Graph-Cut for Liver Segmentation from 3D-CT Volumes

    PubMed Central

    Eapen, Maya; Korah, Reeba; Geetha, G.

    2015-01-01

    The segmentation of organs in CT volumes is a prerequisite for diagnosis and treatment planning. In this paper, we focus on liver segmentation from contrast-enhanced abdominal CT volumes, a challenging task due to intensity overlapping, blurred edges, large variability in liver shape, and complex backgrounds with cluttered features. The algorithm integrates multiple discriminative cues (i.e., prior domain information, an intensity model, and regional characteristics of the liver) in a graph-cut image segmentation framework. The paper proposes a swarm intelligence inspired, edge-adaptive weight function for regulating the energy minimization of the traditional graph-cut model. The model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets (MICCAI 2007 liver segmentation challenge, 3D-IRCAD). Quantitative evaluation of the segmentation results is performed using liver volume calculations, and mean scores of 80.8% and 82.5% are obtained on the MICCAI and IRCAD datasets, respectively. The experimental results illustrate the efficiency and effectiveness of the proposed method. PMID:26689833
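
    A minimal 2D sketch of the graph-cut step referred to above, with a plain intensity-based boundary (n-link) weight standing in for the paper's swarm-tuned, edge-adaptive function; the function names, the networkx max-flow backend and the hard-seed t-links are illustrative assumptions, not the authors' implementation.

      import numpy as np
      import networkx as nx

      def graphcut_segment(image, fg_seeds, bg_seeds, sigma=0.1, lam=1.0):
          """Binary graph-cut segmentation of a small 2D image with intensity-based n-links."""
          h, w = image.shape
          G = nx.DiGraph()
          S, T, BIG = 's', 't', 1e9

          def nlink(p, q):
              # boundary term: similar intensities -> strong link (expensive to cut)
              return lam * np.exp(-((image[p] - image[q]) ** 2) / (2 * sigma ** 2))

          for y in range(h):
              for x in range(w):
                  p = (y, x)
                  if p in fg_seeds:
                      G.add_edge(S, p, capacity=BIG)   # hard foreground seed
                  if p in bg_seeds:
                      G.add_edge(p, T, capacity=BIG)   # hard background seed
                  for q in ((y + 1, x), (y, x + 1)):
                      if q[0] < h and q[1] < w:
                          c = nlink(p, q)
                          G.add_edge(p, q, capacity=c)
                          G.add_edge(q, p, capacity=c)

          _, (source_side, _) = nx.minimum_cut(G, S, T)
          mask = np.zeros((h, w), dtype=bool)
          for node in source_side:
              if node != S:
                  mask[node] = True
          return mask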

  20. Snow Volumes 3D Modeling on the Karstic Plateau of Mount Lebanon (Lebanon)

    NASA Astrophysics Data System (ADS)

    Janine, S.; Luxey, P.; Dhont, D.

    2006-12-01

    Freshwater availability is a major factor in Middle East geopolitics. Its correct management requires accurate knowledge of the underground reserves as well as of surface flow. In Lebanon, snowfall is a major surface input. Evaluations of snow volumes were already performed in 2000-2001, but they were preliminary and opened the door to further, more accurate studies. Our goal is to evaluate the snow volumes remaining at the end of the winter, using a 3D geo-modeler (normally used in the oil industry). The studied snow is deposited onto porous and rough terrain, making it a good candidate for infiltration into the underlying karst reservoirs. The deposits are studied in two different areas, one with circular-shaped dolines where the snow is trapped (Jabal Jraid, between 1760 and 1884 meters), the other characterized by more elongated lows (Sannine plateau, between 2450 and 2625 meters). Our technique uses remotely sensed data such as satellite images and DEMs. The combination of both data sets leads to an automated method for determining the snow volumes. This automation is of high importance, as the measurements can be reproduced at different time intervals, allowing the determination of a melting rate.
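
    A minimal sketch of the kind of automated volume estimate such a workflow produces, assuming a satellite-derived snow mask and a pair of co-registered grids (snow-covered surface and snow-free reference elevation) at the same resolution; all array and parameter names are illustrative.

      import numpy as np

      def snow_volume(snow_surface_dem, bare_ground_dem, snow_mask, cell_size_m):
          """Estimate snow volume (m^3) on a regular grid.

          snow_surface_dem : elevation of the snow surface (m)
          bare_ground_dem  : snow-free reference elevation (m), same grid
          snow_mask        : boolean array from a classified satellite image (True = snow)
          cell_size_m      : ground resolution of one grid cell (m)
          """
          depth = snow_surface_dem - bare_ground_dem
          depth = np.where(snow_mask & (depth > 0), depth, 0.0)
          return depth.sum() * cell_size_m ** 2

      # Repeating the calculation on acquisitions from different dates gives the
      # volume change, from which a mean melting rate can be derived.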

  1. Automated breast mass detection in 3D reconstructed tomosynthesis volumes: a featureless approach.

    PubMed

    Singh, Swatee; Tourassi, Georgia D; Baker, Jay A; Samei, Ehsan; Lo, Joseph Y

    2008-08-01

    The purpose of this study was to propose and implement a computer-aided detection (CADe) tool for breast tomosynthesis. This task was accomplished in two stages: a highly sensitive mass detector followed by a false positive (FP) reduction stage. Breast tomosynthesis data from 100 human subject cases were used, of which 25 subjects had one or more mass lesions and the rest were normal. For stage 1, filter parameters were optimized via a grid search. The CADe-identified suspicious locations were reconstructed to yield 3D CADe volumes of interest. The first stage yielded a maximum sensitivity of 93% with 7.7 FPs/breast volume. Unlike traditional CADe algorithms, in which second-stage FP reduction is done via feature extraction and analysis, information-theoretic principles were used here, with mutual information as the similarity metric. Three schemes were proposed, all using leave-one-case-out cross-validation sampling. The three schemes, A, B, and C, differed in the composition of their knowledge bases of regions of interest (ROIs). Scheme A's knowledge base comprised all the mass and FP ROIs generated by the first stage of the algorithm. Scheme B's knowledge base contained information from mass ROIs and randomly extracted normal ROIs. Scheme C's knowledge base contained information from three sources: masses, FPs, and normal ROIs. Also, performance was assessed as a function of the composition of the knowledge base, in terms of the number of FP or normal ROIs needed by the system to reach optimal performance. The results indicated that the knowledge base needed no more than 20 times as many FPs and 30 times as many normal ROIs as masses to attain maximal performance. The best overall system performance was 85% sensitivity with 2.4 FPs per breast volume for scheme A, 3.6 FPs per breast volume for scheme B, and 3 FPs per breast volume for scheme C. PMID:18777923
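
    A minimal sketch of the mutual-information similarity score underlying a featureless FP-reduction stage of this kind, computed here from a joint intensity histogram of two equally sized ROIs; the binning and the scoring strategy in the closing comment are assumptions for illustration.

      import numpy as np

      def mutual_information(roi_a, roi_b, bins=32):
          """Mutual information (in bits) between the intensity distributions of two
          equally sized ROIs, usable as a similarity score against a knowledge base."""
          hist_2d, _, _ = np.histogram2d(roi_a.ravel(), roi_b.ravel(), bins=bins)
          pxy = hist_2d / hist_2d.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

      # A candidate ROI could then be scored by its maximum (or mean) MI against the
      # mass ROIs versus the FP/normal ROIs in the knowledge base.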

  2. An open source workflow for 3D printouts of scientific data volumes

    NASA Astrophysics Data System (ADS)

    Loewe, P.; Klump, J. F.; Wickert, J.; Ludwig, M.; Frigeri, A.

    2013-12-01

    As the amount of scientific data continues to grow, researchers need new tools to help them visualize complex data. Immersive data visualisations are helpful, yet fail to provide the tactile feedback and the sensory feedback on spatial orientation that tangible objects provide. This gap in sensory feedback from virtual objects has led to the development of tangible representations of geospatial information to solve real-world problems. Examples are animated globes [1], interactive environments like tangible GIS [2], and on-demand 3D prints. The production of a tangible representation of a scientific data set is one step in a line of scientific thinking, leading from the physical world into scientific reasoning and back: the process starts with a physical observation, or with a data stream generated by an environmental sensor. This data stream is turned into a geo-referenced data set, which is turned into a volume representation and then converted into command sequences for the printing device, leading to the creation of a 3D printout. As a last, but crucial, step, this new object has to be documented, linked to the associated metadata, and curated in long-term repositories to preserve its scientific meaning and context. The workflow to produce tangible 3D data prints from science data at the German Research Centre for Geosciences (GFZ) was implemented as software based on the Free and Open Source Geoinformatics tools GRASS GIS and Paraview. The workflow was successfully validated in various application scenarios at GFZ, using a RapMan printer to create 3D specimens of elevation models, geological underground models, ice-penetrating radar soundings for planetology, and space-time stacks for Tsunami model quality assessment. While these first pilot applications have demonstrated the feasibility of the overall approach [3], current research focuses on the provision of the workflow as Software as a Service (SAAS), thematic generalisation of information content and

  3. Low dose four-dimensional computerized tomography with volume rendering reconstruction for primary hyperparathyroidism: How I do it?

    PubMed

    Platz, Timothy A; Kukar, Moshim; Elmarzouky, Rania; Cance, William; Abdelhalim, Ahmed

    2014-09-28

    Modification of 4-dimensional computed tomography (4D-CT) technique with volume rendering reconstructions and significant dose reduction is a safe and accurate method of pre-operative localization for primary hyperparathyroidism. Modified low dose 4D-CT with volume rendering reconstructions provides precise preoperative localization and is associated with a significant reduction in radiation exposure compared to classic preoperative localizing techniques. It should be considered the preoperative localization study of choice for primary hyperparathyroidism. PMID:25276315

  4. Four-chamber heart modeling and automatic segmentation for 3D cardiac CT volumes

    NASA Astrophysics Data System (ADS)

    Zheng, Yefeng; Georgescu, Bogdan; Barbu, Adrian; Scheuering, Michael; Comaniciu, Dorin

    2008-03-01

    Multi-chamber heart segmentation is a prerequisite for quantification of cardiac function. In this paper, we propose an automatic heart chamber segmentation system. There are two closely related tasks in developing such a system: heart modeling and automatic fitting of the model to an unseen volume. The heart is a complicated non-rigid organ with four chambers and several major vessel trunks attached. A flexible and accurate model is necessary to capture the heart chamber shape at an appropriate level of detail. In our four-chamber surface mesh model, the following two factors are considered and traded off: 1) anatomical accuracy and 2) ease of both annotation and automatic detection. Important landmarks such as valves and cusp points on the interventricular septum are explicitly represented in our model. These landmarks can be detected reliably to guide the automatic model fitting process. We also propose two mechanisms, rotation-axis based and parallel-slice based resampling, to establish mesh point correspondence, which is necessary to build a statistical shape model that enforces a priori shape constraints in the model fitting procedure. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3D computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models, and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state-of-the-art. This

  5. Feed-forward volume rendering algorithm for moderately parallel MIMD machines

    NASA Technical Reports Server (NTRS)

    Yagel, Roni

    1993-01-01

    Algorithms for direct volume rendering on parallel and vector processors are investigated. Volumes are transformed efficiently on parallel processors by dividing the data into slices and beams of voxels. Equal sized sets of slices along one axis are distributed to processors. Parallelism is achieved at two levels. Because each slice can be transformed independently of others, processors transform their assigned slices with no communication, thus providing maximum possible parallelism at the first level. Within each slice, consecutive beams are incrementally transformed using coherency in the transformation computation. Also, coherency across slices can be exploited to further enhance performance. This coherency yields the second level of parallelism through the use of the vector processing or pipelining. Other ongoing efforts include investigations into image reconstruction techniques, load balancing strategies, and improving performance.

  6. A data distributed, parallel algorithm for ray-traced volume rendering

    SciTech Connect

    Ma, Kwan-Liu; Painter, J.S.; Hansen, C.D.; Krogh, M.F.

    1993-03-30

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and its implementation on networked workstations and a massively parallel computer, the Connection Machine CM-5. This algorithm distributes the data and the computational load to individual processing units to achieve fast, high-quality rendering of high-resolution data, even when only a modest amount of memory is available on each machine. The volume data, once distributed, are left intact. The processing nodes perform local ray-tracing of their subvolumes concurrently; no communication between processing units is needed during this local ray-tracing phase. A subimage is generated by each processing unit, and the final image is obtained by compositing the subimages in the proper order, which can be determined a priori. Implementations and tests on a group of networked workstations and on the Thinking Machines CM-5 demonstrate the practicality of our algorithm and expose different performance tuning issues for each platform. We use data sets from medical imaging and computational fluid dynamics simulations in the study of this algorithm.
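
    A minimal sketch of the final compositing step described above: each processing unit contributes an RGBA subimage of its subvolume, and the subimages are combined in the a priori front-to-back order with the 'over' operator; the array layout and the [0, 1] value range are assumptions for illustration.

      import numpy as np

      def composite_over(subimages_front_to_back):
          """Combine per-subvolume RGBA subimages (values in [0, 1]) in front-to-back
          order with the 'over' operator, as in the final gather/composite step."""
          h, w, _ = subimages_front_to_back[0].shape
          out_rgb = np.zeros((h, w, 3))
          out_a = np.zeros((h, w, 1))
          for img in subimages_front_to_back:
              rgb, a = img[..., :3], img[..., 3:4]
              out_rgb += (1.0 - out_a) * rgb * a   # accumulate colour not yet occluded
              out_a += (1.0 - out_a) * a           # accumulate opacity
          return np.concatenate([out_rgb, out_a], axis=-1)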

  7. In Vivo CT Direct Volume Rendering: A Three-Dimensional Anatomical Description of the Heart

    PubMed Central

    Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Cacciola, Alberto; Cinquegrani, Maria; Duca, Antonio; Rizzo, Giuseppina; Alati, Emanuela; Gaeta, Michele; Milardi, Demetrio

    2016-01-01

    Background: Since cardiac anatomy continues to play an important role in the practice of medicine and in the development of medical devices, the study of the heart in three dimensions is particularly useful to understand its real structure, function and proper location in the body. Material/Methods: This study demonstrates a fine use of direct volume rendering, processing the data set images obtained by Computed Tomography (CT) of the heart of 5 subjects with an age range between 18 and 42 years (2 male, 3 female), with no history of any overt cardiac disease. The cardiac structure in CT images was first extracted from the thorax by marking the regions of interest manually on the computer, and then it was stacked to create new volumetric data. Results: The use of a specific algorithm allowed us to observe, with a good perception of depth, the heart and the skeleton of the thorax at the same time. Besides, in all examined subjects, it was possible to depict its structure and its position within the body and to study the integrity of the papillary muscles, the fibrous tissue of the cardiac valves and chordae tendineae, and the course of the coronary arteries. Conclusions: Our results demonstrated that one of the greatest advantages of algorithmic modifications of direct volume rendering parameters is that this method provides much necessary information in a single radiologic study. It implies a better accuracy in the study of the heart, being complementary to other diagnostic methods and facilitating therapeutic plans. PMID:26858778

  8. Texture splats for 3D vector and scalar field visualization

    SciTech Connect

    Crawfis, R.A.; Max, N.

    1993-04-06

    Volume Visualization is becoming an important tool for understanding large 3D datasets. A popular technique for volume rendering is known as splatting. With new hardware architectures offering substantial improvements in the performance of rendering texture mapped objects, we present textured splats. An ideal reconstruction function for 3D signals is developed which can be used as a texture map for a splat. Extensions to the basic splatting technique are then developed to additionally represent vector fields.
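
    A minimal sketch of the splatting idea the abstract builds on: each voxel's contribution is spread onto the image plane through a small 2D footprint (a Gaussian here, standing in for the ideal reconstruction kernel, without the texture-mapping hardware path); an axis-aligned orthographic view and simple additive accumulation are assumed for brevity.

      import numpy as np

      def splat_volume(volume, footprint_sigma=1.0, footprint_radius=3):
          """Project a scalar volume along the z axis by accumulating a Gaussian
          'splat' for every voxel, weighted by its value (simplified splatting)."""
          nz, ny, nx = volume.shape
          r = footprint_radius
          ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
          kernel = np.exp(-(xs ** 2 + ys ** 2) / (2 * footprint_sigma ** 2))
          image = np.zeros((ny + 2 * r, nx + 2 * r))   # padded accumulation buffer
          for z in range(nz):
              for y in range(ny):
                  for x in range(nx):
                      v = volume[z, y, x]
                      if v != 0.0:
                          image[y:y + 2 * r + 1, x:x + 2 * r + 1] += v * kernel
          return image[r:-r, r:-r]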

  9. Cell type-specific adaptation of cellular and nuclear volume in micro-engineered 3D environments.

    PubMed

    Greiner, Alexandra M; Klein, Franziska; Gudzenko, Tetyana; Richter, Benjamin; Striebel, Thomas; Wundari, Bayu G; Autenrieth, Tatjana J; Wegener, Martin; Franz, Clemens M; Bastmeyer, Martin

    2015-11-01

    Bio-functionalized three-dimensional (3D) structures fabricated by direct laser writing (DLW) are structurally and mechanically well-defined and ideal for systematically investigating the influence of three-dimensionality and substrate stiffness on cell behavior. Here, we show that different fibroblast-like and epithelial cell lines maintain normal proliferation rates and form functional cell-matrix contacts in DLW-fabricated 3D scaffolds of different mechanics and geometry. Furthermore, the molecular composition of cell-matrix contacts forming in these 3D micro-environments and under conventional 2D culture conditions is identical, based on the analysis of several marker proteins (paxillin, phospho-paxillin, phospho-focal adhesion kinase, vinculin, β1-integrin). However, fibroblast-like and epithelial cells differ markedly in the way they adapt their total cell and nuclear volumes in 3D environments. While fibroblast-like cell lines display significantly increased cell and nuclear volumes in 3D substrates compared to 2D substrates, epithelial cells retain similar cell and nuclear volumes in 2D and 3D environments. Despite differential cell volume regulation between fibroblasts and epithelial cells in 3D environments, the nucleus-to-cell (N/C) volume ratios remain constant for all cell types and culture conditions. Thus, changes in cell and nuclear volume during the transition from 2D to 3D environments are strongly cell type-dependent, but independent of scaffold stiffness, while cells maintain the N/C ratio regardless of culture conditions. PMID:26283159

  10. Automatic intensity-based 3D-to-2D registration of CT volume and dual-energy digital radiography for the detection of cardiac calcification

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2007-03-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to use CT as the "gold standard" to evaluate the ability of DR images to detect and localize calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRRs) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated with digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100%, with mean errors of less than 0.8 mm and 0.2 degrees for both NCC and NMI. The registration accuracy for the physical phantoms is 0.34 +/- 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 +/- 0.03 to 0.25 +/- 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate dual-energy DR images for the detection of coronary artery calcification.
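
    A minimal sketch of this kind of intensity-based 2D-3D registration loop: generate a projection of the CT volume under candidate rigid parameters (an average-intensity projection stands in for the DRR generators above), score it against the DR image with NMI, and drive the parameters with the downhill simplex (Nelder-Mead) optimizer; the parameterization, projection axis and helper names are illustrative assumptions, and the projection must match the DR image size.

      import numpy as np
      from scipy import ndimage, optimize
      from scipy.spatial.transform import Rotation

      def nmi(a, b, bins=32):
          """Normalized mutual information between two equally sized images."""
          h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = h / h.sum()
          px, py = pxy.sum(1), pxy.sum(0)
          hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
          hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
          hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
          return (hx + hy) / hxy

      def project(volume, params):
          """Average-intensity projection of the rigidly transformed CT volume
          (params = 3 rotation angles in degrees followed by 3 translations in voxels)."""
          rot = Rotation.from_euler('xyz', params[:3], degrees=True).as_matrix()
          center = (np.array(volume.shape) - 1) / 2.0
          offset = center - rot @ center + np.asarray(params[3:])
          moved = ndimage.affine_transform(volume, rot, offset=offset, order=1)
          return moved.mean(axis=0)   # project along one axis

      def register(volume, dr_image, x0=np.zeros(6)):
          cost = lambda p: -nmi(project(volume, p), dr_image)
          res = optimize.minimize(cost, x0, method='Nelder-Mead')
          return res.x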

  11. Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification

    PubMed Central

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2013-01-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to use CT as the “gold standard” to evaluate the ability of DR images to detect and localize calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRRs) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measurements. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated with digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100%, with mean errors of less than 0.8 mm and 0.2 degrees for both NCC and NMI. The registration accuracy for the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate dual-energy DR images for the detection of coronary artery calcification. PMID:24386527

  12. Exploring the seismic expression of fault zones in 3D seismic volumes

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2016-04-01

    Mapping and understanding distributed deformation is a major challenge for the structural interpretation of seismic data. However, volumes of seismic signal disturbance with a low signal/noise ratio are systematically observed within 3D seismic datasets around fault systems. These seismic disturbance zones (SDZ) are commonly characterized by complex perturbations of the signal and occur at the sub-seismic to seismic scale. They may store important information on deformation distributed around those larger-scale structures that may be readily interpreted in conventional amplitude displays of seismic data. We introduce a method to detect fault-related disturbance zones and to discriminate between these and other noise sources, such as those associated with the seismic acquisition (footprint noise). Two case studies, from the Taranaki basin and the deep-water Niger delta, are presented. These resolve structure within SDZs using tensor and semblance attributes along with conventional seismic mapping. The tensor attribute is more efficient in tracking volumes containing structural displacements, while structurally oriented semblance coherency is commonly disturbed by small waveform variations around the fault throw. We propose a workflow to map and cross-plot seismic waveform signal properties extracted from the seismic disturbance zone as a tool to investigate the seismic signature and explore the seismic facies of a SDZ.

  13. Exploring the seismic expression of fault zones in 3D seismic volumes

    NASA Astrophysics Data System (ADS)

    Iacopini, D.; Butler, R. W. H.; Purves, S.; McArdle, N.; De Freslon, N.

    2016-08-01

    Mapping and understanding distributed deformation is a major challenge for the structural interpretation of seismic data. However, volumes of seismic signal disturbance with a low signal/noise ratio are systematically observed within 3D seismic datasets around fault systems. These seismic disturbance zones (SDZ) are commonly characterized by complex perturbations of the signal and occur at the sub-seismic (10s of m) to seismic scale (100s of m). They may store important information on deformation distributed around those larger-scale structures that may be readily interpreted in conventional amplitude displays of seismic data. We introduce a method to detect fault-related disturbance zones and to discriminate between these and other noise sources, such as those associated with the seismic acquisition (footprint noise). Two case studies from the Taranaki basin and the deep-water Niger delta are presented. These resolve SDZs using tensor and semblance attributes along with conventional seismic mapping. The tensor attribute is more efficient in tracking volumes containing structural displacements, while structurally oriented semblance coherency is commonly disturbed by small waveform variations around the fault throw. We propose a workflow to map and cross-plot seismic waveform signal properties extracted from the seismic disturbance zone as a tool to investigate the seismic signature and explore the seismic facies of a SDZ.

  14. Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code. Volume 2; Scattering Plots

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This second volume of Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code provides the scattering plots referenced by Volume 1. There are 648 plots. Half are for the 8750 rpm "high speed" operating condition and the other half are for the 7031 rpm "mid speed" operating condition.

  15. A 3-D implicit finite-volume model of shallow water flows

    NASA Astrophysics Data System (ADS)

    Wu, Weiming; Lin, Qianru

    2015-09-01

    A three-dimensional (3-D) model has been developed to simulate shallow water flows in large water bodies, such as coastal and estuarine waters. The eddy viscosity is determined using a newly modified mixing length model that uses different mixing length functions for the horizontal and vertical shear strain rates. The 3-D shallow water flow equations with the hydrostatic pressure assumption are solved using an implicit finite-volume method based on a quadtree (telescoping) rectangular mesh on the horizontal plane and the sigma coordinate in the vertical direction. The quadtree technique can locally refine the mesh around structures or in high-gradient regions by splitting a coarse cell into four child cells. The grid nodes are numbered with a one-dimensional index system that has the character of an unstructured grid, providing better grid flexibility. All the primary variables are arranged in a non-staggered grid system. Fluxes at cell faces are determined using a Rhie and Chow-type momentum interpolation, to avoid the possible spurious checkerboard oscillations caused by linear interpolation. Each of the discretized governing equations is solved iteratively using the flexible GMRES method with ILUT preconditioning, and coupling of water level and velocity among these equations is achieved by using the SIMPLEC algorithm with under-relaxation. The model has been tested in four cases, including steady flow near a spur-dyke, tidal flows in San Francisco Bay and the Gironde Estuary, and wind-induced current in a flume. The calculated water levels and velocities are in good agreement with the measured values.

  16. 99Tcm-pyrophosphate myocardial scintigraphy: the role of volume-rendered three-dimensional imaging in the diagnosis of acute myocardial infarction.

    PubMed

    Howarth, D M; Southee, A E; Allen, L W; Tan, P S

    1995-07-01

    The aim of this study was to evaluate the role of volume-rendered three-dimensional imaging in the diagnosis of acute myocardial infarction (AMI) using 99Tcm-pyrophosphate (99Tcm-PYP) scintigraphy in a diagnostically difficult group of patients. In total, 117 patients were studied using planar, single photon emission tomography (SPET) and 3-D volume-rendered imaging performed 3 h after receiving 555 MBq (15 mCi) of 99Tcm-PYP intravenously. Two teams of physicians reported in a blinded and random fashion on each planar, SPET and 3-D rotating image study. Individual reports were compared with reports that used all three imaging modalities in combination. Observer reproducibility was between 85 and 90% and inter-observer team agreement was between 87 and 91%. A score based on clinical history, electrocardiography, serum cardiac enzyme levels and cardiac risk factors was validated and used as a de facto 'gold standard' for AMI for the 75 subjects for whom all these details were available. The sensitivity, specificity and overall accuracy of the combined planar, SPET and 3-D rotating image modalities were all 84%. Analysis of each modality in isolation showed SPET imaging to have the highest sensitivity (74%) and specificity (89%). In combination with SPET and planar imaging, 3-D rotating image presentation increases diagnostic sensitivity without appreciably altering overall diagnostic accuracy. 99Tcm-PYP myocardial SPET imaging shows good utility in the diagnosis of AMI in diagnostically difficult patients. PMID:7478393

  17. Determining gully volume from straightforward photo-based 3D reconstruction

    NASA Astrophysics Data System (ADS)

    James, M. R.; Castillo, C.; Pérez, R.; Taguas, E. V.; Gomez, J. A.; Quinton, J. N.

    2012-04-01

    In order to quantify soil loss through gully erosion, accurate measurements of gully volume are required. However, gullies are usually extended features, often with complex morphologies, and are challenging to survey appropriately and efficiently. Here we explore the use of a photo-based technique for deriving 3D gully models suitable for detailed erosion studies. Traditional aerial and oblique close-range photogrammetry approaches have previously been used to produce accurate digital elevation models (DEMs) from photographs. However, these techniques require expertise to carry out successfully, use proprietary software and usually need a priori camera calibration. The computer vision approach we adopt here relaxes these requirements and allows 3D models to be produced automatically from collections of unordered photos. We use a freely available 'reconstruction pipeline' (http://blog.neonascent.net/archives/bundler-photogrammetry-package/) that combines structure-from-motion and multi-view stereo algorithms (SfM-MVS) to generate dense point clouds (millions of points). The model is derived from photos taken from different positions with a consumer camera and is then scaled and georeferenced using additional software (http://www.lancs.ac.uk/staff/jamesm/software/sfm_georef.htm) and observations of some control points in the scene. The approach was tested on a ~7-m long sinuous gully section (average width and depth ~2.4 and 1.2 m, respectively) in Vertisol soils near Cordoba, Spain. For benchmark data, the gully topography was determined with a terrestrial laser scanner (Riegl LMS-Z420i, with a cited range accuracy of 10 mm). 191 photos were taken with a Canon EOS 450D with a prime (fixed) 28 mm lens over a period of ~10 minutes. In order to georeference the SfM-MVS model for comparison with the TLS data, 6 control targets were located around the gully and their locations determined by dGPS. Differences between the TLS and SfM-MVS surfaces are dominated by areas of data

  18. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration, which could occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.
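
    A minimal sketch of the update-on-arrival idea described above: each incoming 2D slice overwrites its slot in the volume buffer and a new rendering is produced immediately (a maximum-intensity projection stands in for the real volume renderer); the slice stream and the display call in the commented loop are hypothetical.

      import numpy as np

      def update_and_render(volume, new_slice, slice_index):
          """Insert the newest 2D slice into the volume buffer and return a fresh
          maximum-intensity projection as a simple stand-in for the volume renderer."""
          volume[slice_index] = new_slice
          return volume.max(axis=0)

      # Acquisition loop (pseudo-stream): render immediately after each slice arrives.
      # volume = np.zeros((n_slices, ny, nx), dtype=np.float32)
      # for k, slice_k in enumerate(slice_stream):
      #     frame = update_and_render(volume, slice_k, k % n_slices)
      #     display(frame)   # hypothetical display call in the magnet room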

  19. 3D Quantitative Confocal Laser Microscopy of Ilmenite Volume Distribution in Alpe Arami Olivine

    NASA Astrophysics Data System (ADS)

    Bozhilov, K. N.

    2001-12-01

    The deep origin of the Alpe Arami garnet lherzolite massif in the Swiss Alps proposed by Dobrzhinetskaya et al. (Science, 1996) has been a focus of heated debate. One of the lines of evidence supporting exhumation from more than 200 km depth includes the abundance, distribution, and orientation of magnesian ilmenite rods in the oldest generation of olivine. This argument has been disputed in terms of the abundance of ilmenite and, consequently, the maximum TiO2 content in the discussed olivine. In order to address this issue, we have directly measured the volume fraction of ilmenite in the oldest generation of olivine by applying confocal laser scanning microscopy (CLSM). CLSM is a method which allows three-dimensional imaging and quantitative volume determination by optical sectioning of the objects. The images for 3D reconstruction and measurement were acquired from petrographic thin sections in reflected laser light with 488 nm wavelength. Measurements of more than 80 olivine grains in six thin sections of our material yielded an average volume fraction of 0.31% ilmenite in the oldest generation of olivine from Alpe Arami. This translates into 0.23 wt.% TiO2 in olivine, with a determination error of ±0.097 wt.%, a value significantly different from the 0.02 to 0.03 wt.% TiO2 determined by Hacker et al. (Science, 1997) by a broad-beam microanalysis technique. During the complex geological history of the Alpe Arami massif, several events of metamorphism are recorded, all of which could have caused increased mobility of the mineral components. Evidence for loss of TiO2 from olivine is the tendency for high densities of ilmenite to be restricted to the cores of old grains, the complete absence of ilmenite inclusions from the younger, recrystallized generation of olivine, and the reduction in ilmenite size and abundance in more serpentinized specimens. These observations suggest that only olivine grains with the highest concentrations of ilmenite are close to the

  20. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Symonds-Tayler, J. Richard N.; Evans, Philip M.

    2011-11-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogeneous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We therefore examined the effect of the direction of object motion relative to the transducer sweep on tracking accuracy. We imaged a homogeneous ultrasound speckle phantom whilst moving the probe with linear motion at a speed of 0-35 mm s-1. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision decreased with speed and tracking failure was observed at speeds greater than 14 mm s-1. Tracking failure was attributed to speckle de-correlation as a result of the decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that, using the swept-probe technology, speckle tracking accuracy is currently too poor to track homogeneous tissue over
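
    A minimal sketch of 3D speckle tracking by exhaustive block matching with normalized cross-correlation between two successive volumes; the block size, search range and variable names are illustrative assumptions, not the study's implementation.

      import numpy as np

      def track_block(vol_a, vol_b, corner, block=(16, 16, 16), search=8):
          """Estimate the 3D displacement (voxels) of a speckle block between two volumes
          by exhaustive normalized cross-correlation over a +/- search window."""
          z, y, x = corner
          bz, by, bx = block
          ref = vol_a[z:z + bz, y:y + by, x:x + bx]
          ref = (ref - ref.mean()) / (ref.std() + 1e-12)
          best, best_shift = -np.inf, (0, 0, 0)
          for dz in range(-search, search + 1):
              for dy in range(-search, search + 1):
                  for dx in range(-search, search + 1):
                      z0, y0, x0 = z + dz, y + dy, x + dx
                      if min(z0, y0, x0) < 0:
                          continue                     # candidate block outside the volume
                      cand = vol_b[z0:z0 + bz, y0:y0 + by, x0:x0 + bx]
                      if cand.shape != ref.shape:
                          continue
                      cand = (cand - cand.mean()) / (cand.std() + 1e-12)
                      ncc = float(np.mean(ref * cand))
                      if ncc > best:
                          best, best_shift = ncc, (dz, dy, dx)
          return best_shift, best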

  1. Hybrid Parallelism for Volume Rendering on Large, Multi- and Many-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2011-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
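
    A structural sketch of the hybrid pattern described above -- distributed-memory ranks (e.g., one per node) each rendering their own brick with a pool of shared-memory workers, followed by a gather for compositing; mpi4py, the placeholder ray-casting kernel and the maximum-based compositing are assumptions for illustration only, not the paper's implementation.

      import numpy as np
      from mpi4py import MPI
      from concurrent.futures import ThreadPoolExecutor

      def render_scanline(brick, row):
          # placeholder "ray-casting" kernel: max intensity along one axis for one image row
          return brick[:, row, :].max(axis=0)

      def render_brick(brick, n_threads=4):
          # shared-memory level: image rows of this rank's brick handled by a worker pool
          rows = range(brick.shape[1])
          with ThreadPoolExecutor(max_workers=n_threads) as pool:
              scanlines = list(pool.map(lambda r: render_scanline(brick, r), rows))
          return np.stack(scanlines)

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      brick = np.random.rand(32, 64, 64)            # this rank's subvolume (stand-in data)
      sub_image = render_brick(brick)
      sub_images = comm.gather(sub_image, root=0)   # distributed-memory level: gather for compositing
      if rank == 0:
          image = np.maximum.reduce(sub_images)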

  2. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-07-12

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  3. Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-06-14

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  4. MPI-hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

    SciTech Connect

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2010-03-20

    This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de-facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.

  5. Topological Galleries: A High Level User Interface for Topology Controlled Volume Rendering

    SciTech Connect

    MacCarthy, Brian; Carr, Hamish; Weber, Gunther H.

    2011-06-30

    Existing topological interfaces to volume rendering are limited by their reliance on sophisticated knowledge of topology by the user. We extend previous work by describing topological galleries, an interface for novice users that is based on the design galleries approach. We report three contributions: an interface based on hierarchical thumbnail galleries to display the containment relationships between topologically identifiable features, the use of the pruning hierarchy instead of branch decomposition for contour tree simplification, and drag-and-drop transfer function assignment for individual components. Initial results suggest that this approach suffers from limitations due to the rapid drop-off of feature size in the pruning hierarchy. We explore these limitations by providing statistics of feature size as a function of depth in the pruning hierarchy of the contour tree.

  6. Location constraint based 2D-3D registration of fluoroscopic images and CT volumes for image-guided EP procedures

    NASA Astrophysics Data System (ADS)

    Liao, Rui; Xu, Ning; Sun, Yiyong

    2008-03-01

    The presentation of detailed anatomical structures via 3D computed tomography (CT) volumes aids visualization and navigation in electrophysiology (EP) procedures. Registration of the CT volume with the online fluoroscopy, however, is a challenging task for EP applications due to the lack of discernable features in fluoroscopic images. In this paper, we propose to use the coronary sinus (CS) catheter in bi-plane fluoroscopic images and the coronary sinus in the CT volume as a location constraint to accomplish 2D-3D registration. Two automatic registration algorithms are proposed in this study, and their performance is investigated on both simulated and real data. It is shown that, compared to registration using mono-plane fluoroscopy, registration using bi-plane images results in substantially higher accuracy in 3D and enhanced robustness. In addition, compared to registering the projection of the CS to the 2D CS catheter, it is more desirable to reconstruct a 3D CS catheter from the bi-plane fluoroscopy and then perform a 3D-3D registration between the CS and the reconstructed CS catheter. Quantitative validation based on simulation and visual inspection of real data demonstrates the feasibility of the proposed workflow in EP procedures.

  7. Error propagation in the computation of volumes in 3D city models with the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Biljecki, F.; Ledoux, H.; Stoter, J.

    2014-11-01

    This paper describes the analysis of the propagation of positional uncertainty in 3D city models to the uncertainty in the computation of their volumes. Current work related to error propagation in GIS is limited to 2D data and 2D GIS operations, especially on rasters. In this research we have (1) developed two engines, one that generates random 3D buildings in CityGML in multiple LODs, and one that simulates acquisition errors in the geometry; (2) performed an error propagation analysis of volume computation based on the Monte Carlo method; and (3) worked towards establishing a framework for investigating error propagation in 3D GIS. The results of the experiments show that a comparatively small error in the geometry of a 3D city model may cause significant discrepancies in the computation of its volume. This has consequences for several applications, such as the estimation of energy demand and property taxes. The contribution of this work is twofold: this is the first error propagation analysis in 3D city modelling, and the novel approach and the engines that we have created can be used for analysing most 3D GIS operations, supporting related research efforts in the future.
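
    A minimal sketch of Monte Carlo error propagation for volume computation: the vertices of a building solid are perturbed with Gaussian positional noise and the spread of the resulting computed volumes is collected. The convex-hull volume routine and the example block are simplifying assumptions -- real CityGML solids would need a general polyhedron volume computation.

      import numpy as np
      from scipy.spatial import ConvexHull

      def monte_carlo_volume(vertices, sigma_xyz, n_runs=1000, seed=0):
          """Propagate Gaussian positional error (std dev per axis, in metres) of a
          convex building's vertices to the distribution of its computed volume."""
          rng = np.random.default_rng(seed)
          volumes = np.empty(n_runs)
          for i in range(n_runs):
              noisy = vertices + rng.normal(0.0, sigma_xyz, size=vertices.shape)
              volumes[i] = ConvexHull(noisy).volume
          return volumes.mean(), volumes.std()

      # Example: a 10 m x 8 m x 6 m block (LOD1-like) with 2 cm positional error
      box = np.array([[x, y, z] for x in (0, 10) for y in (0, 8) for z in (0, 6)], float)
      mean_v, std_v = monte_carlo_volume(box, sigma_xyz=0.02)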

  8. Toward automatic detection of vessel stenoses in cerebral 3D DSA volumes

    NASA Astrophysics Data System (ADS)

    Mualla, F.; Pruemmer, M.; Hahn, D.; Hornegger, J.

    2012-05-01

    Vessel diseases are a very common cause of permanent organ damage, disability and death. This fact necessitates further research into extracting meaningful and reliable medical information from 3D DSA volumes. Murray's law states that at each branch point of a lumen-based system, the sum of the minor branch diameters, each raised to the power x, is equal to the main branch diameter raised to the power x. The principle of minimum work and other factors, such as the vessel type, impose typical values on the junction exponent x. Therefore, deviations from these typical values may signal pathological cases. In this paper, we state the necessary and sufficient conditions for the existence and uniqueness of the solution for x. The second contribution is a scale- and orientation-independent set of features for stenosis classification. A support vector machine classifier was trained in the space of these features. Only one branch was misclassified in a cross-validation on 23 branches. The two contributions fit into a pipeline for the automatic detection of cerebral vessel stenoses.
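
    A minimal sketch of recovering the junction exponent from measured diameters, i.e. solving sum_i d_i^x = D^x for x with a bracketing root finder; the bracket and the example diameters are illustrative, and a root only exists within the bracket under existence conditions of the kind the paper analyses.

      import numpy as np
      from scipy.optimize import brentq

      def junction_exponent(d_main, d_minor, x_lo=0.5, x_hi=10.0):
          """Solve the Murray-type relation sum_i d_i^x = D^x for the junction exponent x.

          d_main  : diameter of the main (parent) branch
          d_minor : iterable of minor (daughter) branch diameters
          Note: brentq requires f(x_lo) and f(x_hi) to have opposite signs.
          """
          d_minor = np.asarray(d_minor, dtype=float)
          f = lambda x: np.sum(d_minor ** x) - d_main ** x
          return brentq(f, x_lo, x_hi)

      # Example: parent 3.0 mm, daughters 2.4 mm and 2.4 mm -> x of about 3.1;
      # a strong deviation from the typical exponent could flag a pathological junction.
      x = junction_exponent(3.0, [2.4, 2.4])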

  9. Quality control of dose volume histogram computation characteristics of 3D treatment planning systems

    NASA Astrophysics Data System (ADS)

    Panitsa, E.; Rosenwald, J. C.; Kappas, C.

    1998-10-01

    Detailed quality control (QC) protocols are a necessity for modern radiotherapy departments. The established QC protocols for treatment planning systems (TPS) do not include recommendations on the advanced features of three-dimensional (3D) treatment planning, such as dose volume histograms (DVHs). In this study, a test protocol for DVH characteristics was developed. The protocol assesses the consistency of the DVH computation with the dose distribution calculated by the same TPS, by comparing DVH parameters with values obtained from the isodose distributions. The computation parameters (such as the dimension of the computation grid) that are applied to the TPS during the tests are not fixed but are set by the user as if the test represented a typical clinical case. Six commercial TPS were examined with this protocol within the framework of the EC project Dynarad (Biomed I). The results of the intercomparison confirm the consistency of the DVH results with the isodose values for most of the examined TPS. However, special attention should be paid when working with adverse conditions, such as regions with a high dose gradient. In these cases, larger errors arise, especially when an insufficient number of dose calculation points is used for the DVH computation.
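
    A minimal sketch of the quantity being checked: a cumulative DVH computed from a 3D dose grid and a structure mask; the binning and the consistency check in the closing comment are illustrative assumptions, not the protocol itself.

      import numpy as np

      def cumulative_dvh(dose, structure_mask, n_bins=200):
          """Cumulative dose-volume histogram of one structure from a 3D dose grid.

          Returns (dose_levels, volume_fraction), where volume_fraction[i] is the
          fraction of the structure receiving at least dose_levels[i].
          """
          d = dose[structure_mask]
          levels = np.linspace(0.0, d.max(), n_bins)
          frac = np.array([(d >= lvl).mean() for lvl in levels])
          return levels, frac

      # Consistency check in the spirit of the protocol: the volume fraction read from
      # the DVH at a given isodose level should match the fraction of structure voxels
      # enclosed by that isodose surface in the displayed dose distribution.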

  10. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    NASA Astrophysics Data System (ADS)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is reducing the large number of false positives. A large proportion of false positives originates from acoustic shadowing caused by ribs. Therefore, determining the location of the chest wall in ABUS is necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that adds a region cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images where our previously developed method fails. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  11. Method of interactive specification of interested regions via a volume-rendered image with application to virtualized endoscope system

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Higuchi, Yoshitake; Suenaga, Yasuhito; Toriwaki, Jun-ichiro; Hasegawa, Jun-ichi; Katada, Kazuhiro

    2000-04-01

    This paper proposes a method to specify regions of interest, including points, lines, surfaces and mass regions, directly and interactively through a volume-rendered image, and its application to a virtual endoscopy system. Measurement is one of the most important functions in virtual endoscopy, and a target region must be specified on the virtual endoscopic image for measurement. It is hard to specify target regions on the organ wall in a volume-rendered image, since the organ is not explicitly segmented from the input image when observed using volume rendering. The proposed method enables the user to specify regions of interest directly by analyzing the change of accumulated opacity along a cast ray. When the user specifies a point on a volume-rendered image, we cast a ray from the viewpoint that passes through the specified point on the image plane. We take the position with the highest accumulated opacity as the three-dimensional position of the specified point. Line and surface regions are obtained by iterating the point specification method. A mass region is obtained by finding the interval on the ray where the opacity is greater than zero. We have implemented these specification methods in our virtual endoscopy system. The results showed that we could specify points, lines, surfaces and mass regions on volume-rendered images.
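
    A minimal sketch of one simple variant of the picking step: opacity is accumulated front-to-back along the cast ray and the first sample at which the accumulation (nearly) saturates is returned as the picked 3D position; the nearest-neighbour sampling, step size and saturation threshold are illustrative assumptions rather than the paper's exact criterion.

      import numpy as np

      def pick_3d_point(opacity_volume, origin, direction, step=0.5, threshold=0.95):
          """Accumulate opacity front-to-back along a ray and return the 3D sample
          position where the accumulated opacity first becomes (nearly) saturated."""
          direction = np.asarray(direction, float)
          direction /= np.linalg.norm(direction)
          pos = np.asarray(origin, float)
          accumulated = 0.0
          while np.all(pos >= 0) and np.all(pos < np.array(opacity_volume.shape) - 1):
              alpha = opacity_volume[tuple(np.round(pos).astype(int))]  # nearest-neighbour sample
              accumulated += (1.0 - accumulated) * alpha
              if accumulated >= threshold:
                  return pos.copy()
              pos = pos + step * direction
          return None   # the ray left the volume before reaching the threshold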

  12. SRB-3D Solid Rocket Booster performance prediction program. Volume 3: Programmer's manual

    NASA Technical Reports Server (NTRS)

    Winkler, J. C.

    1976-01-01

    The programmer's manual for the Modified Solid Rocket Booster Performance Prediction Program (SRB-3D) describes the major control routines of SRB-3D, followed by a super index listing of the program and a cross-reference of the program variables.

  13. Perspective volume rendering of cross-sectional images for simulated endoscopy and intraparenchymal viewing

    NASA Astrophysics Data System (ADS)

    Napel, Sandy; Rubin, Geoffrey D.; Beaulieu, Christopher F.; Jeffrey, R. Brooke, Jr.; Argiro, Vincent

    1996-04-01

    The capability of today's clinical scanners to create large quantities of high-resolution and near isotropically sampled volume data, coupled with a rapidly improving performance/price ratio of computers, has created the challenge and the feasibility of creating new ways to explore cross-sectional medical imagery. Perspective volume rendering (PVR) allows an observer to 'fly through' image data and view its contents from within for diagnostic and treatment planning purposes. We simulated flights through 14 data sets and, where possible, these were compared to conventional endoscopy. We demonstrated colonic masses and polyps as small as 5 mm, tracheal obstructions and precise positioning of endoluminal stent-grafts. Simulated endoscopy was capable of generating views not possible with conventional endoscopy due to its restrictions on camera location and orientation. Interactive adjustment of tissue opacities permitted views beyond the interior of lumina to reveal other structures such as masses, thrombus, and calcifications. We conclude that PVR is an exciting new technique with the potential to supplement and/or replace some conventional diagnostic imaging procedures. It has further utility for treatment planning and communication with colleagues, and the potential to reduce the number of normal people who would otherwise undergo more invasive procedures without benefit.

  14. MaterialVis: material visualization tool using direct volume and surface rendering techniques.

    PubMed

    Okuyan, Erhan; Güdükbay, Uğur; Bulutay, Ceyhun; Heinig, Karl-Heinz

    2014-05-01

    Visualization of the materials is an indispensable part of their structural analysis. We developed a visualization tool for amorphous as well as crystalline structures, called MaterialVis. Unlike the existing tools, MaterialVis represents material structures as a volume and a surface manifold, in addition to plain atomic coordinates. Both amorphous and crystalline structures exhibit topological features as well as various defects. MaterialVis provides a wide range of functionality to visualize such topological structures and crystal defects interactively. Direct volume rendering techniques are used to visualize the volumetric features of materials, such as crystal defects, which are responsible for the distinct fingerprints of a specific sample. In addition, the tool provides surface visualization to extract hidden topological features within the material. Together with the rich set of parameters and options to control the visualization, MaterialVis allows users to visualize various aspects of materials very efficiently as generated by modern analytical techniques such as the Atom Probe Tomography. PMID:24739396

  15. Three-dimensional structure of the curved mixing layer using image reconstruction and volume rendering

    NASA Astrophysics Data System (ADS)

    Karasso, P. S.; Mungal, M. G.

    1991-05-01

    This study investigates the structure and mixing of the two-dimensional turbulent mixing layer when subjected to longitudinal streamwise curvature. The straight layer is now well known to be dominated by the primary Kelvin-Helmholtz (KH) instability as well as the secondary Taylor-Goertler (TG) instability. For equal density fluids, placing the high-speed fluid on the inside of a streamwise bend causes the TG instability to be enhanced (unstable case), while placing the low-speed fluid on the inside of the same bend leads to the suppression of the TG instability (stable case). The location of the mixing transition is correspondingly altered. Our goal is to study the changes to the mixing field and growth rate resulting from the competition between instabilities. Our studies are performed in a newly constructed blow-down water facility capable of high Reynolds numbers and with excellent optical access. Maximum flow speeds are 2 and 0.25 m/sec for the high- and low-speed sides, respectively, leading to maximum Reynolds numbers of 80 000 based on velocity difference and the width of the layer. We are able to dye one stream with a fluorescent dye, thus providing several planar views of the flow under laser sheet illumination. These views are superior to conventional approaches as they are free of wall effects and are not spatially integrating. However, our most useful diagnostic of the structure of the flow is the ability to record high-speed images of the end view of the flow that are then reconstructed by computer using the volume rendering technique of Jiménez et al. [1]. This approach is especially useful as it allows us to compare the structural changes to the flow resulting from the competition between the KH and TG instabilities. Another advantage is the fact that several hundred frames, covering many characteristic times, are incorporated into the rendered image and thus capture considerably more flow physics than do still images. We currently have our rendering

  16. A methodology to mesh mesoscopic representative volume element of 3D interlock woven composites impregnated with resin

    NASA Astrophysics Data System (ADS)

    Ha, Manh Hung; Cauvin, Ludovic; Rassineux, Alain

    2016-04-01

    We present a new numerical methodology to build a Representative Volume Element (RVE) of a wide range of 3D woven composites in order to determine the mechanical behavior of the fabric unit cell by a mesoscopic approach based on a 3D finite element analysis. Emphasis is put on the numerous difficulties of creating a mesh of these highly complex weaves embedded in a resin. A conforming mesh at the numerous interfaces between yarns is created by a multi-quadtree adaptation technique, which makes it possible thereafter to build an unstructured 3D mesh of the resin with tetrahedral elements. The technique is not linked with any specific tool, but can be carried out with the use of any 2D and 3D robust mesh generators.
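
    A toy illustration of interface-driven quadtree adaptation is sketched below: 2D cells that straddle a circular yarn/resin interface are recursively subdivided. This is far simpler than the multi-quadtree, conforming 3D meshing described in the paper and is intended only to convey the refinement criterion.

```python
# Toy interface-driven quadtree refinement: cells straddling a circular
# "yarn/resin" interface are recursively subdivided. Purely illustrative.

def straddles_interface(x0, y0, size, radius=0.6):
    """True if sample points of the cell fall on both sides of the circle."""
    pts = [(x0, y0), (x0 + size, y0), (x0, y0 + size), (x0 + size, y0 + size),
           (x0 + size / 2, y0 + size / 2)]
    signs = [(x * x + y * y) > radius * radius for x, y in pts]
    return any(signs) and not all(signs)

def refine(x0, y0, size, depth=0, max_depth=5):
    """Return leaf cells (x, y, size) of the adapted quadtree."""
    if depth == max_depth or not straddles_interface(x0, y0, size):
        return [(x0, y0, size)]
    half = size / 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += refine(x0 + dx, y0 + dy, half, depth + 1, max_depth)
    return leaves

leaves = refine(-1.0, -1.0, 2.0)
print(len(leaves), "leaf cells after interface-driven refinement")
```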

  17. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine learning based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold and connectivity based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also examine such structures (liver vessels). For an experimental leave-some-out study consisting of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of leave-some-out experiments, we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
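
    A minimal sketch of voxel-wise random-forest tissue classification with a DICE overlap check is shown below (scikit-learn). The features, label definition and synthetic CT arrays are invented placeholders and do not reproduce the authors' feature selection or training pipeline.

```python
# Sketch of voxel-wise random forest classification with a DICE check.
# Features, labels and CT arrays are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(ct):
    """Per-voxel features: intensity plus voxel coordinates."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in ct.shape], indexing="ij")
    return np.column_stack([ct.ravel(), zz.ravel(), yy.ravel(), xx.ravel()])

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-9)

rng = np.random.default_rng(0)
train_ct = rng.integers(-1000, 1500, (16, 32, 32)).astype(float)
train_labels = (train_ct > 200).astype(int)          # stand-in "bone" labels

clf = RandomForestClassifier(n_estimators=30, n_jobs=-1, random_state=0)
clf.fit(voxel_features(train_ct), train_labels.ravel())

test_ct = rng.integers(-1000, 1500, (16, 32, 32)).astype(float)
pred = clf.predict(voxel_features(test_ct)).reshape(test_ct.shape)
print("DICE vs. reference:", round(dice(pred, test_ct > 200), 3))
```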

  18. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    PubMed Central

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-01-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications. PMID:26883390

  19. A simple method for the production of large volume 3D macroporous hydrogels for advanced biotechnological, medical and environmental applications

    NASA Astrophysics Data System (ADS)

    Savina, Irina N.; Ingavle, Ganesh C.; Cundy, Andrew B.; Mikhalovsky, Sergey V.

    2016-02-01

    The development of bulk, three-dimensional (3D), macroporous polymers with high permeability, large surface area and large volume is highly desirable for a range of applications in the biomedical, biotechnological and environmental areas. The experimental techniques currently used are limited to the production of small size and volume cryogel material. In this work we propose a novel, versatile, simple and reproducible method for the synthesis of large volume porous polymer hydrogels by cryogelation. By controlling the freezing process of the reagent/polymer solution, large-scale 3D macroporous gels with wide interconnected pores (up to 200 μm in diameter) and large accessible surface area have been synthesized. For the first time, macroporous gels (of up to 400 ml bulk volume) with controlled porous structure were manufactured, with potential for scale up to much larger gel dimensions. This method can be used for production of novel 3D multi-component macroporous composite materials with a uniform distribution of embedded particles. The proposed method provides better control of freezing conditions and thus overcomes existing drawbacks limiting production of large gel-based devices and matrices. The proposed method could serve as a new design concept for functional 3D macroporous gels and composites preparation for biomedical, biotechnological and environmental applications.

  20. Improved volume rendering for the visualization of living cells examined with confocal microscopy

    NASA Astrophysics Data System (ADS)

    Enloe, L. Charity; Griffing, Lawrence R.

    2000-02-01

    This research applies recent advances in 3D isosurface reconstruction to images of test spheres and plant cells growing in suspension culture. Isosurfaces that represent object boundaries are constructed with a Marching Cubes algorithm applied to simple data sets, i.e., fluorescent test beads, and complex data sets, i.e., fluorescent plant cells, acquired with a Zeiss Confocal Laser Scanning Microscope (LSM). The Marching Cubes algorithm treats each pixel or voxel of the image as a separate entity when performing computations. To test the spatial accuracy of the reconstruction, control data representing the volume of a 25 micrometer test sphere was obtained with the LSM. This volume was then judged on the basis of uniformity and smoothness. Using polygon decimation and smoothing algorithms available through the Visualization Toolkit (VTK), 'voxellated' test spheres and cells were smoothed with several different algorithms after unessential polygons were eliminated. With these improvements, the shape of subcellular organelles could be modeled at various levels of accuracy. However, in order to accurately reconstruct these complex structures of interest to us, the subcellular organelles of the endosomal system or the endoplasmic reticulum of plant cells, measurements of the accuracy of connectedness of structures need to be developed.
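
    The isosurface workflow described above can be sketched with scikit-image and NumPy: extract a surface from a (synthetic) confocal stack with marching cubes, then apply a simple Laplacian vertex-smoothing pass. The original work used the VTK pipeline; the smoothing parameters and phantom here are illustrative only.

```python
# Sketch: marching cubes isosurface from a synthetic confocal stack, followed
# by a simple Laplacian vertex smoothing pass (scikit-image + NumPy).
import numpy as np
from skimage import measure

z, y, x = np.mgrid[:64, :64, :64]                     # synthetic fluorescent bead
stack = (((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2) < 20 ** 2).astype(float)

verts, faces, normals, values = measure.marching_cubes(stack, level=0.5)

def laplacian_smooth(verts, faces, iterations=5, lam=0.5):
    """Move each vertex toward the mean of its (face-connected) neighbours."""
    neighbours = [set() for _ in range(len(verts))]
    for tri in faces:
        for i in tri:
            neighbours[i].update(int(j) for j in tri)
    v = verts.copy()
    for _ in range(iterations):
        means = np.array([v[list(n)].mean(axis=0) for n in neighbours])
        v = v + lam * (means - v)
    return v

smoothed = laplacian_smooth(verts, faces)
print(len(verts), "vertices,", len(faces), "triangles")
```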

  1. Subjective quality and depth assessment in stereoscopic viewing of volume-rendered medical images

    NASA Astrophysics Data System (ADS)

    Rousson, Johanna; Couturou, Jeanne; Vetsuypens, Arnout; Platisa, Ljiljana; Kumcu, Asli; Kimpe, Tom; Philips, Wilfried

    2014-03-01

    No study to date has explored the relationship between perceived image quality (IQ) and perceived depth (DP) in stereoscopic medical images. However, this is crucial to design objective quality metrics suitable for stereoscopic medical images. This study examined this relationship using volume-rendered stereoscopic medical images for both dual- and single-view distortions. The reference image was modified to simulate common alterations occurring during the image acquisition stage or at the display side: added white Gaussian noise, Gaussian filtering, and changes in luminance, brightness and contrast. We followed a double stimulus five-point quality scale methodology to conduct subjective tests with eight non-expert human observers. The results suggested that DP was very robust to luminance, contrast and brightness alterations and insensitive to noise distortions up to a standard deviation of σ = 20 and to crosstalk rates of up to 7%. In contrast, IQ seemed sensitive to all distortions. Finally, for both DP and IQ, the Friedman test indicated that the quality scores for dual-view distortions were significantly worse than scores for single-view distortions for multiple blur levels and crosstalk impairments. No differences were found for most levels of brightness, contrast and noise distortions. Thus, DP and IQ did not react equivalently to identical impairments, and both depended on whether dual- or single-view distortions were applied.
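
    The group comparison reported above can be sketched with a Friedman test over per-observer quality scores (SciPy). The scores below are random placeholders, not the study's data; only the structure of the analysis is illustrated.

```python
# Sketch of the Friedman test over per-observer quality scores (SciPy).
# Scores are random placeholders, not the study's data.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# One row of scores per condition, one entry per observer (8 observers).
reference_view = rng.normal(4.5, 0.3, 8)
single_view    = rng.normal(3.8, 0.4, 8)
dual_view      = rng.normal(3.1, 0.4, 8)

stat, p = friedmanchisquare(reference_view, single_view, dual_view)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```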

  2. 3D thoracoscopic ultrasound volume measurement validation in an ex vivo and in vivo porcine model of lung tumours

    NASA Astrophysics Data System (ADS)

    Hornblower, V. D. M.; Yu, E.; Fenster, A.; Battista, J. J.; Malthaner, R. A.

    2007-01-01

    The purpose of this study was to validate the accuracy and reliability of volume measurements obtained using three-dimensional (3D) thoracoscopic ultrasound (US) imaging. Artificial 'tumours' were created by injecting a liquid agar mixture into spherical moulds of known volume. Once solidified, the 'tumours' were implanted into the lung tissue in both a porcine lung sample ex vivo and a surgical porcine model in vivo. 3D US images were created by mechanically rotating the thoracoscopic ultrasound probe about its long axis while the transducer was maintained in close contact with the tissue. Volume measurements were made by one observer using the ultrasound images and a manual-radial segmentation technique and these were compared with the known volumes of the agar. Ex vivo measurements had an average accuracy and precision of 4.76% and 1.77%, respectively; in vivo measurements had an average accuracy and precision of 8.18% and 1.75%, respectively. 3D thoracoscopic ultrasound can therefore be used to accurately and reproducibly measure 'tumour' volumes both in vivo and ex vivo.
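
    Accuracy and precision for repeated volume measurements against a known mould volume are commonly computed as a relative error and a relative spread; a sketch with invented numbers follows (the study's exact definitions may differ).

```python
# Sketch of accuracy (relative error of the mean) and precision (relative
# spread) for repeated volume measurements; all numbers are invented.
import numpy as np

true_volume_ml = 4.19                                     # known mould volume
measurements = np.array([4.40, 4.35, 4.31, 4.44, 4.38])   # repeated 3D US volumes

accuracy = abs(measurements.mean() - true_volume_ml) / true_volume_ml * 100
precision = measurements.std(ddof=1) / measurements.mean() * 100
print(f"accuracy {accuracy:.2f}%, precision {precision:.2f}%")
```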

  3. Geometry modeling and grid generation using 3D NURBS control volume

    NASA Technical Reports Server (NTRS)

    Yu, Tzu-Yi; Soni, Bharat K.; Shih, Ming-Hsin

    1995-01-01

    The algorithms for volume grid generation using NURBS geometric representation are presented. The parameterization algorithm is enhanced to yield a desired physical distribution on the curve, surface and volume. This approach bridges the gap between CAD surface/volume definition and surface/volume grid generation. Computational examples associated with practical configurations have shown the utilization of these algorithms.
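
    A minimal NURBS curve evaluator (Cox-de Boor recursion) illustrates the kind of geometric representation these algorithms build on. The control points, weights and knot vector are arbitrary examples, not taken from the paper.

```python
# Minimal NURBS curve evaluation via the Cox-de Boor recursion.
# Control points, weights and knot vector are arbitrary examples.
import numpy as np

def basis(i, p, u, knots):
    """i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, ctrl, weights, knots, degree=2):
    """Rational combination of control points at parameter u."""
    num, den = np.zeros(ctrl.shape[1]), 0.0
    for i in range(len(ctrl)):
        b = basis(i, degree, u, knots) * weights[i]
        num, den = num + b * ctrl[i], den + b
    return num / den if den else num

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
weights = np.array([1.0, 0.8, 0.8, 1.0])
knots = np.array([0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0])
print(nurbs_point(0.25, ctrl, weights, knots))
```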

  4. A low volume 3D-printed temperature-controllable cuvette for UV visible spectroscopy.

    PubMed

    Pisaruka, Jelena; Dymond, Marcus K

    2016-10-01

    We report the fabrication of a 3D-printed water-heated cuvette that fits into a standard UV visible spectrophotometer. Full 3D-printable designs are provided and 3D-printing conditions have been optimised to provide options to print the cuvette in either acrylonitrile butadiene styrene or polylactic acid polymers, extending the range of solvents that are compatible with the design. We demonstrate the efficacy of the cuvette by determining the critical micelle concentration of sodium dodecyl sulphate at 40 °C, the molar extinction coefficients of cobalt nitrate and dsDNA and by reproducing the thermochromic UV visible spectrum of a mixture of cobalt chloride, water and propan-2-ol. PMID:27443958

  5. 3D Surface Reconstruction of Plant Seeds by Volume Carving: Performance and Accuracies

    PubMed Central

    Roussel, Johanna; Geiger, Felix; Fischbach, Andreas; Jahnke, Siegfried; Scharr, Hanno

    2016-01-01

    We describe a method for 3D reconstruction of plant seed surfaces, focusing on small seeds with diameters as small as 200 μm. The method assumes a robotized single-seed handling system that rotates each seed in front of a camera. Even though such systems feature high position repeatability, camera pose variations have to be compensated at sub-millimeter object scales. We do this by robustly estimating the tool center point from each acquired image. 3D reconstruction can then be performed by a simple shape-from-silhouette approach. In experiments we investigate runtimes, theoretically achievable accuracy and experimentally achieved accuracy, and show as a proof of principle that the proposed method is fully sufficient for 3D seed phenotyping purposes. PMID:27375628
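
    A compact shape-from-silhouette (volume carving) sketch is given below: a voxel is kept only if it projects inside the silhouette in every view. The orthographic camera model and synthetic silhouettes are simplifications; the paper's calibrated, robotized setup is not reproduced.

```python
# Shape-from-silhouette (volume carving) sketch with an orthographic camera
# rotating about the vertical axis; silhouettes and calibration are synthetic.
import numpy as np

def carve(silhouettes, angles, grid=64):
    """Keep a voxel only if it projects inside the silhouette in every view."""
    coords = np.linspace(-1, 1, grid)
    X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
    occupied = np.ones((grid, grid, grid), dtype=bool)
    for mask, theta in zip(silhouettes, angles):
        xr = np.cos(theta) * X + np.sin(theta) * Y        # rotate about z, then project
        u = np.clip(((xr + 1) / 2 * (grid - 1)).round().astype(int), 0, grid - 1)
        v = np.clip(((Z + 1) / 2 * (grid - 1)).round().astype(int), 0, grid - 1)
        occupied &= mask[v, u]
    return occupied

# A circular silhouette seen from every angle carves out (roughly) a sphere.
angles = np.linspace(0, np.pi, 18, endpoint=False)
cc = np.linspace(-1, 1, 64)
U, V = np.meshgrid(cc, cc, indexing="ij")
disk = (U ** 2 + V ** 2) < 0.5 ** 2
volume = carve([disk] * len(angles), angles)
print("occupied voxels:", int(volume.sum()))
```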

  6. Three dimensional level set based semiautomatic segmentation of atherosclerotic carotid artery wall volume using 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Hossain, Md. Murad; AlMuhanna, Khalid; Zhao, Limin; Lal, Brajesh K.; Sikdar, Siddhartha

    2014-03-01

    3D segmentation of carotid plaque from ultrasound (US) images is challenging due to image artifacts and poor boundary definition. Semiautomatic segmentation algorithms for calculating vessel wall volume (VWV) have been proposed for the common carotid artery (CCA) but they have not been applied to plaques in the internal carotid artery (ICA). In this work, we describe a 3D segmentation algorithm that is robust to shadowing and missing boundaries. Our algorithm uses a distance regularized level set method with edge and region based energy to segment the adventitial wall boundary (AWB) and lumen-intima boundary (LIB) of plaques in the CCA, ICA and external carotid artery (ECA). The algorithm is initialized by manually placing points on the boundary of a subset of transverse slices with an interslice distance of 4 mm. We propose a novel user-defined stopping-surface-based energy to prevent leaking of the evolving surface across poorly defined boundaries. Validation was performed against manual segmentation using 3D US volumes acquired from five asymptomatic patients with carotid stenosis using a linear 4D probe. A pseudo gold-standard boundary was formed from manual segmentation by three observers. The Dice similarity coefficient (DSC), Hausdorff distance (HD) and modified HD (MHD) were used to compare the algorithm results against the pseudo gold-standard on 1205 cross-sectional slices of 5 3D US image sets. The algorithm showed good agreement with the pseudo gold-standard boundary with mean DSC of 93.3% (AWB) and 89.82% (LIB); mean MHD of 0.34 mm (AWB) and 0.24 mm (LIB); mean HD of 1.27 mm (AWB) and 0.72 mm (LIB). The proposed 3D semiautomatic segmentation is the first step towards full characterization of 3D plaque progression and longitudinal monitoring.
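
    The validation metrics named above (DSC, HD and a modified HD) can be sketched as follows; the masks and contours are synthetic stand-ins rather than carotid segmentations, and the MHD definition used here is one common variant.

```python
# Sketch of the validation metrics: DSC on masks, HD and a modified (average)
# HD on boundary point sets. Masks and contours are synthetic stand-ins.
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

def modified_hausdorff(pts_a, pts_b):
    d = cdist(pts_a, pts_b)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

mask_alg = np.zeros((64, 64), bool); mask_alg[16:48, 16:48] = True
mask_ref = np.zeros((64, 64), bool); mask_ref[18:50, 16:48] = True
print("DSC:", round(dice(mask_alg, mask_ref), 3))

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)       # circular contours, mm
alg_boundary = np.column_stack([3.0 * np.cos(t), 3.0 * np.sin(t)])
ref_boundary = np.column_stack([3.2 * np.cos(t), 3.2 * np.sin(t)])
print("HD  (mm):", round(hausdorff(alg_boundary, ref_boundary), 3))
print("MHD (mm):", round(modified_hausdorff(alg_boundary, ref_boundary), 3))
```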

  7. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  8. Using numerical models and volume rendering to interpret acoustic imaging of hydrothermal flow

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Bennett, K.; Takle, J.; Rona, P. A.; Silver, D.

    2009-12-01

    Our acoustic imaging system will be installed onto the Neptune Canada observatory at the Main Endeavour Field, Juan de Fuca Ridge, which is a Ridge 2000 Integrated Study Site. Thereafter, 16-30 Gb of acoustic imaging data will be collected daily. We are developing a numerical model of merging plumes that will be used to guide expectations, and volume rendering software that transforms volumetric acoustic data into photo-like images. Hydrothermal flow is modeled as a combination of merged point sources which can be configured in any geometry. The model stipulates the dissipation or dilution of the flow and uses potential fields and complex analysis to combine the entrainment fields produced by each source. The strengths of this model are (a) the ability to handle a variety of scales, especially the small scale, as the potential fields can be specified with an effectively infinite boundary condition, (b) the ability to handle line, circle and areal source configurations, and (c) the ability to handle both high temperature focused flow and low temperature diffuse flow. This model predicts the vertical and horizontal velocities and the spatial distribution of effluent from combined sources of variable strength in a steady ambient velocity field. To verify the accuracy of the model's results, we compare the model predictions of plume centerlines for the merging of two relatively strong point sources with the acoustic imaging data collected at Clam Acres, Southwest Vent Field, EPR 21°N in 1990. The two chimneys are 3.5 m apart and the plumes emanating from their tops merge approximately 18 m above bottom (mab). The model is able to predict the height of merging and the bending of the centerlines. Merging is implicitly observed at Grotto Vent, Main Endeavour Field, in our VIP 2000 data from July 2000: although there are at least 5 vigorous black smokers, only a single plume is discernible in the acoustic imaging data. Furthermore, the observed Doppler velocity data increases with height
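
    A loose sketch of the superposition idea is given below: entrainment around each vent is represented by a 2D potential-flow point sink, the sinks are summed, and a steady ambient current is added. The sink strengths, positions and ambient velocity are illustrative assumptions, not the model's actual formulation.

```python
# Loose sketch of superposed entrainment fields: each vent is a 2-D potential
# flow point sink, plus a steady ambient current. Values are illustrative.
import numpy as np

def entrainment_velocity(z, sink_positions, sink_strengths, ambient=0.02 + 0j):
    """Complex velocity u - i*v at points z (complex coordinates, metres)."""
    w_prime = np.full_like(z, ambient, dtype=complex)
    for z0, m in zip(sink_positions, sink_strengths):
        w_prime += -m / (2 * np.pi * (z - z0))   # dw/dz of a point sink of strength m
    return w_prime

# Two chimneys 3.5 m apart, sampled along a horizontal line 1 m above them.
x = np.linspace(-5.0, 5.0, 11)
z = x + 1j * 1.0
vel = entrainment_velocity(z, sink_positions=[-1.75 + 0j, 1.75 + 0j],
                           sink_strengths=[0.5, 0.5])
print(np.round(vel.real, 3))   # horizontal component drawn toward the two sources
```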

  9. [Measurement of left atrial and ventricular volumes in real-time 3D echocardiography. Validation by nuclear magnetic resonance]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Qin, J. X.; White, R. D.; Thomas, J. D.

    2001-01-01

    The measurement of the left ventricular ejection fraction is important for the evaluation of cardiomyopathy and depends on the measurement of left ventricular volumes. There are no existing conventional echocardiographic means of measuring the true left atrial and ventricular volumes without mathematical approximations. The aim of this study was to test a new real-time 3-dimensional echocardiographic system for calculating left atrial and ventricular volumes in 40 patients after in vitro validation. The volumes of the left atrium and ventricle acquired from real-time 3-D echocardiography in the apical view were calculated in 7 sections parallel to the surface of the probe and compared with atrial (10 patients) and ventricular (30 patients) volumes calculated by nuclear magnetic resonance with the Simpson method and with volumes of water in balloons placed in a cistern. Linear regression analysis showed an excellent correlation between the real volume of water in the balloons and volumes given by real-time 3-dimensional echocardiography (y = 0.94x + 5.5, r = 0.99, p < 0.001, D = -10 +/- 4.5 ml). A good correlation was observed between real-time 3-dimensional echocardiography and nuclear magnetic resonance for the measurement of left atrial and ventricular volumes (y = 0.95x - 10, r = 0.91, p < 0.001, D = -14.8 +/- 19.5 ml and y = 0.87x + 10, r = 0.98, p < 0.001, D = -8.3 +/- 18.7 ml, respectively). The authors conclude that real-time three-dimensional echocardiography allows accurate measurement of left heart volumes, underlining the clinical potential of this new 3-D method.
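
    The section-by-section volume computation amounts to a discrete slice summation: each cross-sectional area is multiplied by the slice thickness and the products are summed. The sketch below uses invented areas; it is not the clinical software's algorithm.

```python
# Slice-summation sketch: sum of (cross-sectional area x slice thickness)
# over parallel sections. Areas and spacing below are invented placeholders.
def slice_volume(areas_cm2, thickness_cm):
    """Return volume in millilitres (1 ml = 1 cm^3)."""
    return sum(a * thickness_cm for a in areas_cm2)

areas = [4.1, 9.8, 14.2, 16.0, 14.5, 9.1, 3.0]   # seven hypothetical sections (cm^2)
print(f"estimated volume: {slice_volume(areas, 1.2):.1f} ml")
```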

  10. The alteration in placental volume and placental mean grey value in growth-restricted pregnancies assessed by 3D ultrasound (Growth Restriction & 3D Ultrasonography).

    PubMed

    Artunc Ulkumen, B; Pala, H G; Uyar, Y; Koyuncu, F M; Bulbul Baytur, Y

    2015-01-01

    We aimed to evaluate the volumetric and echogenic alterations in placentas between intrauterine growth restriction (IUGR) and normal pregnancies using three-dimensional ultrasound and virtual organ computer-aided analysis (VOCAL) software. This case-control prospective study consisted of 48 singleton pregnancies complicated by IUGR and 60 healthy singleton pregnancies matched for maternal age, gestational age and parity. Placental volume (PV) and placental volumetric mean grey values (MGV) were evaluated. PV (cm³) was analysed using the VOCAL imaging analysis program, and a 3D histogram was used to calculate the volumetric MGV (%). PV was 278.50 ± 63.68 and 370.98 ± 97.82 cm³ in IUGR and control groups, respectively (p = 0.004). MGV of the placenta was 38.24 ± 8.41 and 38.24 ± 8.41 in IUGR and control groups, respectively (p = 0.30). Receiver operating characteristic (ROC) curve analysis revealed that the area under the curve was 0.731 for PV. Correlation analysis revealed that PV was significantly associated with estimated fetal weight (r = 0.319, p = 0.003), biparietal diameter (r = 0.346, p = 0.002), head circumference (r = 0.269, p = 0.019), abdominal circumference (r = 0.344, p = 0.002) and femur length (r = 0.328, p = 0.004). PV was inversely related to the umbilical artery pulsatility index (r = -0.244, p = 0.017). To the best of our knowledge, this is the first study evaluating volumetric MGV in IUGR placentas by comparing them with healthy pregnancies. Our study showed that PV diminishes significantly in IUGR pregnancies, whereas volumetric MGV does not alter significantly. PMID:25409488
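
    The two statistics reported above, the area under the ROC curve and Pearson correlations, can be sketched as follows with synthetic group data (scikit-learn and SciPy); all numbers are placeholders matched only loosely to the reported group means.

```python
# Sketch of the reported statistics: ROC area under the curve for placental
# volume as an IUGR discriminator, and a Pearson correlation. Synthetic data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
pv_iugr = rng.normal(278, 64, 48)          # placental volumes, IUGR group (cm^3)
pv_ctrl = rng.normal(371, 98, 60)          # placental volumes, control group (cm^3)

labels = np.r_[np.ones(48), np.zeros(60)]  # 1 = IUGR
scores = -np.r_[pv_iugr, pv_ctrl]          # smaller volume -> higher predicted risk
print("AUC:", round(roc_auc_score(labels, scores), 3))

efw = 3.2 * np.r_[pv_iugr, pv_ctrl] + rng.normal(0, 150, 108)   # fake EFW (g)
r, p = pearsonr(np.r_[pv_iugr, pv_ctrl], efw)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```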