Sample records for volume rendering imaging

  1. Foundations for Measuring Volume Rendering Quality

    NASA Technical Reports Server (NTRS)

    Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters needed to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics that quantify and classify the difference between two volume rendered images and support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. Given a benchmark image, for example one created by a high-accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
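
    A minimal sketch of the kind of difference metrics the abstract describes: a global error measure (RMSE) plus a classification of pixels by difference magnitude. The threshold value and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def difference_metrics(img_a, img_b, minor_threshold=0.05):
    """Quantify the difference between two volume-rendered images and
    classify pixels by difference magnitude. `minor_threshold` is a
    hypothetical cutoff separating negligible from significant change."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    diff = a - b
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    frac_significant = float(np.mean(np.abs(diff) > minor_threshold))
    return rmse, frac_significant

# Evaluate a candidate rendering against a high-accuracy benchmark image.
benchmark = np.zeros((4, 4))
candidate = benchmark.copy()
candidate[0, 0] = 0.2              # one pixel deviates noticeably
rmse, frac = difference_metrics(benchmark, candidate)
```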

  2. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional (4D) image is 3D volume data that varies with time, used to represent deforming or moving objects in applications such as virtual surgery and 4D ultrasound. Rendering 4D images with conventional ray casting or shear-warp factorization is difficult because these methods require long rendering times or a pre-processing stage whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D rendering. In this study, we propose a method that reduces data loading time by exploiting the coherence between the currently loaded volume and the previously loaded one, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick to be loaded is tested for similarity to the one already in memory. If a brick passes the test, it is defined as a 3D texture by OpenGL functions. The texture slices of the brick are then mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes were rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently achievable on a PC.
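
    One plausible reading of the brick similarity test, sketched below: only bricks whose content changed beyond a tolerance relative to the cached copy would be re-uploaded as 3D textures. Brick size, tolerance, and function names are assumptions for illustration.

```python
import numpy as np

def bricks_to_reload(new_volume, cached_bricks, brick=4, tol=1e-3):
    """List the indices of bricks that changed beyond `tol` relative to
    the cached copies; only those would be re-uploaded as 3D textures."""
    nz, ny, nx = new_volume.shape
    origins = [(z, y, x)
               for z in range(0, nz, brick)
               for y in range(0, ny, brick)
               for x in range(0, nx, brick)]
    changed = []
    for k, (z, y, x) in enumerate(origins):
        b = new_volume[z:z + brick, y:y + brick, x:x + brick]
        if np.max(np.abs(b - cached_bricks[k])) > tol:
            changed.append(k)
    return changed

# Cache the bricks of one frame, then modify a single voxel of the next.
vol = np.zeros((8, 8, 8))
cached = [vol[z:z + 4, y:y + 4, x:x + 4].copy()
          for z in range(0, 8, 4) for y in range(0, 8, 4) for x in range(0, 8, 4)]
vol2 = vol.copy()
vol2[0, 0, 5] = 1.0        # modification falls inside brick index 1
to_reload = bricks_to_reload(vol2, cached)
```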

  3. PRISM: An open source framework for the interactive design of GPU volume rendering shaders.

    PubMed

    Drouin, Simon; Collins, D Louis

    2018-01-01

    Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with five medical imaging experts who had little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.

  4. PRISM: An open source framework for the interactive design of GPU volume rendering shaders

    PubMed Central

    Collins, D. Louis

    2018-01-01

    Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with five medical imaging experts who had little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel. PMID:29534069

  5. Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.

    PubMed

    Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K

    1995-07-01

    The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is a computationally fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
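
    The contrast between MIP (one voxel survives per ray) and volume rendering (every voxel contributes a weighted amount) can be sketched for a single ray; the sample and opacity values below are made up for illustration.

```python
import numpy as np

def mip_ray(samples):
    """MIP: keep only the maximum-intensity sample along the ray."""
    return float(np.max(samples))

def composite_ray(samples, opacities):
    """Volume rendering: front-to-back 'over' compositing, summing the
    weighted contribution of every sample along the ray."""
    color, alpha = 0.0, 0.0
    for s, a in zip(samples, opacities):
        color += (1.0 - alpha) * a * s   # contribution attenuated by what is in front
        alpha += (1.0 - alpha) * a
    return color

samples = [0.1, 0.9, 0.4]
mip_value = mip_ray(samples)                          # only the brightest voxel
dvr_value = composite_ray(samples, [0.5, 0.5, 0.5])   # all voxels contribute
```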

  6. Real-time volume rendering of digital medical images on an iOS device

    NASA Astrophysics Data System (ADS)

    Noon, Christian; Holub, Joseph; Winer, Eliot

    2013-03-01

    Performing high-quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still a quite difficult task. This is especially true for 3D volume rendering of digital medical images. Achieving it would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features, including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs), such as OpenSceneGraph (OSG) as the primary graphics renderer, coupled with iOS Cocoa Touch for user interaction and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device, so no Internet connection is required.

  7. Application of volume rendering technique (VRT) for musculoskeletal imaging.

    PubMed

    Darecki, Rafał

    2002-10-30

    A review of the applications of volume rendering technique in musculoskeletal three-dimensional imaging from CT data. General features, potential and indications for applying the method are presented.

  8. Enhanced visualization of MR angiogram with modified MIP and 3D image fusion

    NASA Astrophysics Data System (ADS)

    Kim, JongHyo; Yeon, Kyoung M.; Han, Man Chung; Lee, Dong Hyuk; Cho, Han I.

    1997-05-01

    We have developed a 3D image processing and display technique that includes image resampling, a modification of MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates the visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram overlapped with a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other brain structures. The two images are fused, after adjustment of the contrast and brightness levels of each, in such a way that both the vasculature and the brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resulting image visualizes both the brain structure and the vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for surgical planning in neurosurgery.
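
    The first fusion rule the abstract mentions, selecting the per-pixel maximum of the two images, can be sketched as follows; the tiny example arrays are illustrative, not from the paper.

```python
import numpy as np

def fuse_max(mip_image, rendered_image):
    """Fuse a MIP angiogram with a volume-rendered anatomy image by a
    per-pixel maximum, so bright vessels remain visible on top of the
    rendered brain structure."""
    return np.maximum(mip_image, rendered_image)

vessels = np.array([[0.9, 0.0],
                    [0.0, 0.8]])   # MIP of the vasculature
brain = np.array([[0.3, 0.4],
                  [0.5, 0.2]])     # volume-rendered brain
fused = fuse_max(vessels, brain)
```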

  9. Rapid Decimation for Direct Volume Rendering

    NASA Technical Reports Server (NTRS)

    Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane

    1997-01-01

    An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.

  10. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  11. A Graph Based Interface for Representing Volume Visualization Results

    NASA Technical Reports Server (NTRS)

    Patten, James M.; Ma, Kwan-Liu

    1998-01-01

    This paper discusses a graph based user interface for representing the results of the volume visualization process. As images are rendered, they are connected to other images in a graph based on their rendering parameters. The user can take advantage of the information in this graph to understand how certain rendering parameter changes affect a dataset, making the visualization process more efficient. Because the graph contains more information than is contained in an unstructured history of images, the image graph is also helpful for collaborative visualization and animation.
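
    A minimal sketch of the image-graph idea: each rendered image is a node, and an edge links two images that differ in exactly one rendering parameter, so the user can see what that single change did. The class and parameter names are hypothetical, not the paper's API.

```python
class ImageGraph:
    """Graph of rendered images keyed by their rendering parameters."""
    def __init__(self):
        self.nodes = []   # list of (params, image) pairs
        self.edges = []   # (node_i, node_j, changed_parameter)

    def add(self, params, image):
        i = len(self.nodes)
        self.nodes.append((params, image))
        for j, (other, _) in enumerate(self.nodes[:i]):
            changed = [k for k in set(params) | set(other)
                       if params.get(k) != other.get(k)]
            if len(changed) == 1:          # single-parameter change: link them
                self.edges.append((j, i, changed[0]))
        return i

g = ImageGraph()
g.add({"opacity": 0.5, "view": "front"}, "img0")
g.add({"opacity": 0.8, "view": "front"}, "img1")   # only opacity changed
g.add({"opacity": 0.8, "view": "side"}, "img2")    # only view changed vs img1
```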

  12. Framework for cognitive analysis of dynamic perfusion computed tomography with visualization of large volumetric data

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Ogiela, Marek R.

    2012-10-01

    The proposed framework for cognitive analysis of perfusion computed tomography images is a fusion of image processing, pattern recognition, and image analysis procedures. The output of the algorithm consists of regions of perfusion abnormalities, anatomy-atlas descriptions of brain tissues, measures of perfusion parameters, and a prognosis for infarcted tissues. That information is superimposed onto the volumetric computed tomography data and displayed to radiologists. Our rendering algorithm enables rendering of large volumes on off-the-shelf hardware. This portability of the rendering solution is very important because our framework can be run without expensive dedicated hardware. Other important factors are the theoretically unlimited size of the rendered volume and the possibility of trading image quality for rendering speed. Such high-quality visualizations may be further used for intelligent identification of brain perfusion abnormalities and computer-aided diagnosis of selected types of pathologies.

  13. Elasticity-based three dimensional ultrasound real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.

    2009-02-01

    Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Its use has been hindered by the lack of real-time visualization methods capable of producing a high-quality 3D rendering of the target or surface of interest. Volume rendering is a well-known visualization method that can display clear surfaces from acquired volumetric data and has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target or surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetric and angiographic applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on a GPU, which gives an update rate of 40 volumes/s.
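
    One way the strain-to-opacity mapping could look: make low-strain (stiff) voxels opaque and blend with a gradient-based term. The weight, the stiffness window, and the Gaussian form are illustrative guesses, not values from the report.

```python
import numpy as np

def elasticity_opacity(strain, gradient_mag, weight=0.7,
                       stiff_center=0.2, width=0.1):
    """Per-voxel opacity from strain (stiff tissue, i.e. strain near
    `stiff_center`, is made opaque) mixed with a conventional
    gradient-magnitude opacity."""
    strain_term = np.exp(-((np.asarray(strain) - stiff_center) / width) ** 2)
    gradient_term = np.clip(gradient_mag, 0.0, 1.0)
    return weight * strain_term + (1.0 - weight) * gradient_term

# A stiff voxel (strain 0.2) versus a soft one (strain 0.8), no gradient.
opacity = elasticity_opacity(np.array([0.2, 0.8]), np.array([0.0, 0.0]))
```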

  14. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    PubMed

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume observable by users during interactive volume rendering. Manipulating this visibility improves volume rendering processes, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering viewpoint. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume-rendered medical images have been a primary beneficiary of VHs, given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time use of VHs with medical images that have large intensity ranges and volume dimensions and thus require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphical processing units (GPUs), which enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual or numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also performed better than the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
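
    The adaptive-binning step can be sketched with a tiny 1-D k-means over voxel intensities, standing in for the cluster analysis the paper uses; the quantile initialization and the toy data are assumptions for illustration.

```python
import numpy as np

def adaptive_bins(intensities, k=3, iters=20):
    """Group voxel intensities into k adaptive bins with a small 1-D
    k-means. Returns per-voxel bin labels and the sorted bin centres,
    replacing a full per-intensity histogram with k cluster bins."""
    x = np.asarray(intensities, dtype=float).ravel()
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))   # spread initial centres
    labels = np.zeros(x.size, dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean()
    return labels, np.sort(centers)

# Three well-separated intensity groups collapse into three bins.
voxels = np.array([0.0, 0.1, 0.05, 5.0, 5.1, 9.9, 10.0])
labels, centers = adaptive_bins(voxels, k=3)
```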

  15. NOTE: Wobbled splatting—a fast perspective volume rendering method for simulation of x-ray images from CT

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar

    2005-05-01

    3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs—which are perspective summed voxel renderings—is desired. In this note, we present a simple and rapid method for generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by the simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at framerates of approximately 10 Hz when rendering volume images with a size of 30 MB.
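
    The splat-with-jitter idea can be sketched for a parallel projection: every voxel's value is accumulated into the detector pixel nearest its stochastically perturbed position, with the jitter standing in for an explicit footprint kernel. The orthographic geometry and the 0.4-voxel jitter are simplifying assumptions; the note renders perspective DRRs.

```python
import numpy as np

def wobbled_splat_drr(volume, jitter=0.4, seed=0):
    """Orthographic DRR by splatting: each voxel's value is added to the
    detector pixel nearest its jittered (y, x) position; the stochastic
    jitter replaces a discrete point spread function (footprint)."""
    rng = np.random.default_rng(seed)
    nz, ny, nx = volume.shape
    drr = np.zeros((ny, nx))
    idx = np.indices(volume.shape).reshape(3, -1).astype(float)  # (z, y, x)
    pos = idx[1:] + rng.uniform(-jitter, jitter, idx[1:].shape)
    py = np.clip(np.rint(pos[0]), 0, ny - 1).astype(int)
    px = np.clip(np.rint(pos[1]), 0, nx - 1).astype(int)
    np.add.at(drr, (py, px), volume.reshape(-1))   # summed voxel rendering
    return drr

ct = np.ones((2, 3, 3))
drr = wobbled_splat_drr(ct)
```

    Note that the jitter blurs where each voxel lands but conserves the total attenuation, which is why the resulting DRRs are "slightly blurred" yet still usable for registration.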

  16. Real-time reconstruction of three-dimensional brain surface MR image using new volume-surface rendering technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Momose, T.; Oku, S.

    It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to make even one 3D image, so it has not been practical to make brain surface images in arbitrary directions on a real-time basis using ordinary workstations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of the resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of the resulting images is comparable to that of VRT and the computation time to that of SRT. In VSRT the process of volume rendering is done only once, in the direction of the normal vector of each surface point, rather than each time a new view point is determined as in VRT. Subsequent reconstruction of the 3D image uses an algorithm similar to that of SRT. Thus we can obtain brain surface MR images of sufficient quality viewed from any direction on a real-time basis using an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 s in VSRT, compared with more than 15 s in the conventional VRT. The difference in resulting image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR images is very useful and practical in functional-anatomical correlation studies.

  17. Standardized volume-rendering of contrast-enhanced renal magnetic resonance angiography.

    PubMed

    Smedby, O; Oberg, R; Asberg, B; Stenström, H; Eriksson, P

    2005-08-01

    To propose a technique for standardizing volume-rendering technique (VRT) protocols and to compare this with maximum intensity projection (MIP) in regard to image quality and diagnostic confidence in stenosis diagnosis with magnetic resonance angiography (MRA). Twenty patients were examined with MRA under suspicion of renal artery stenosis. Using the histogram function in the volume-rendering software, the 95th and 99th percentiles of the 3D data set were identified and used to define the VRT transfer function. Two radiologists assessed the stenosis pathology and image quality from rotational sequences of MIP and VRT images. Good overall agreement (mean kappa=0.72) was found between MIP and VRT diagnoses. The agreement between MIP and VRT was considerably better than that between observers (mean kappa=0.43). One of the observers judged VRT images as having higher image quality than MIP images. Presenting renal MRA images with VRT gave results in good agreement with MIP. With VRT protocols defined from the histogram of the image, the lack of an absolute gray scale in MRI need not be a major problem.
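
    A minimal sketch of the histogram-standardized transfer function: the abstract says the 95th and 99th percentiles of the 3D data set define the VRT transfer function, so one plausible reading is an opacity ramp between those two intensities. The linear ramp itself is an assumption.

```python
import numpy as np

def percentile_vrt_opacity(volume, low_pct=95, high_pct=99):
    """Standardized VRT transfer function: opacity ramps from 0 at the
    volume's 95th intensity percentile to 1 at the 99th, so the protocol
    adapts to MRI's lack of an absolute gray scale."""
    lo = np.percentile(volume, low_pct)
    hi = np.percentile(volume, high_pct)
    def opacity(v):
        return np.clip((np.asarray(v, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    return opacity, lo, hi

vol = np.arange(101, dtype=float)      # toy volume with intensities 0..100
op, lo, hi = percentile_vrt_opacity(vol)
```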

  18. A data distributed parallel algorithm for ray-traced volume rendering

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.

    1993-01-01

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this local ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
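
    The final compositing step can be sketched as a front-to-back 'over' blend of the per-node subimages in their predetermined visibility order. Representing each subimage as a (premultiplied color, alpha) pair is an assumption for illustration.

```python
import numpy as np

def composite_subimages(subimages):
    """Blend per-node subimages front to back with the 'over' operator.
    Each subimage is a (premultiplied_color, alpha) array pair; the list
    must already be in the a-priori determined visibility order."""
    color = np.zeros_like(subimages[0][0])
    alpha = np.zeros_like(subimages[0][1])
    for c, a in subimages:
        color = color + (1.0 - alpha) * c   # back layer shows through gaps
        alpha = alpha + (1.0 - alpha) * a
    return color, alpha

# Two nodes' subimages: a half-transparent front and an opaque back.
front = (np.full((2, 2), 0.6), np.full((2, 2), 0.5))
back = (np.full((2, 2), 0.4), np.full((2, 2), 1.0))
color, alpha = composite_subimages([front, back])
```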

  19. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.

    PubMed

    Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz

    2015-01-01

    This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
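
    The core warping step, sampling the reference CT through a time-variant displacement field, can be sketched with a nearest-neighbour pull-back; the interpolation choice and the toy one-row volume are assumptions for illustration.

```python
import numpy as np

def warp_volume(reference, displacement):
    """Warp a reference CT volume with a per-voxel displacement field:
    each output voxel samples the reference at its own position plus
    the displacement (nearest-neighbour pull-back)."""
    shape = reference.shape
    grid = np.indices(shape).astype(float)       # (3, nz, ny, nx)
    src = np.rint(grid + displacement).astype(int)
    for axis, n in enumerate(shape):
        src[axis] = np.clip(src[axis], 0, n - 1)
    return reference[src[0], src[1], src[2]]

ref = np.zeros((1, 1, 4))
ref[0, 0, 0] = 7.0                 # a bright structure at x = 0
disp = np.zeros((3, 1, 1, 4))
disp[2] = -1.0                     # pull-back: content appears shifted +1 in x
warped = warp_volume(ref, disp)
```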

  20. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    PubMed

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.

  1. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399

  2. Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo

    NASA Astrophysics Data System (ADS)

    Masters, Barry R.

    1998-10-01

    The problem of three-dimensional visualization of a human lens in vivo has been solved by a technique of volume rendering a transformed series of 60 rotated Scheimpflug (a dual slit reflected light microscope) digital images. The data set was obtained by rotating the Scheimpflug camera about the optic axis of the lens in 3-degree increments. The transformed set of optical sections was first aligned to correct for small eye movements and then rendered into a volume reconstruction using volume rendering computer graphics techniques. To help visualize the distribution of lens opacities (cataracts) in the living human lens, the intensity of light scattering was pseudocolor coded and the cataract opacities were displayed as a movie.

  3. Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; VanGelder, Allen

    1997-01-01

    In the second year, we continued to build upon and improve our scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data changes but the grid does not. We are now working on extending it to handle more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more moderately priced. While this technique, at present, is limited to use with regular grids, we are pursuing algorithms that extend the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation.
We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in a pair of images and compares them using various metrics. Differences between the images can be calculated and displayed using the RGB, HSV, and HSL color models. We can also calculate the auto-correlation function and the Fourier transform of the image and image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.
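The kind of pairwise comparison idtools performs can be illustrated with simple per-pixel metrics. The metric set below is an illustrative assumption, not the tool's actual interface:

```python
import numpy as np

def image_difference_metrics(img_a, img_b):
    """Compare two rendered RGB images (float arrays in [0, 1]) with simple
    per-pixel metrics, in the spirit of the idtools comparisons described
    above. The metric names are illustrative, not the tool's actual API."""
    diff = img_a.astype(float) - img_b.astype(float)
    return {
        "mae": float(np.abs(diff).mean()),           # mean absolute error
        "rmse": float(np.sqrt((diff ** 2).mean())),  # root-mean-square error
        "max": float(np.abs(diff).max()),            # worst-case pixel error
    }

a = np.zeros((4, 4, 3))         # all-black test image
b = np.full((4, 4, 3), 0.5)     # uniform mid-gray test image
m = image_difference_metrics(a, b)
```

A perceptual comparison would work in HSV/HSL or a frequency domain, as the text describes; the structure of the comparison is the same.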

  4. Automatic transfer function generation for volume rendering of high-resolution x-ray 3D digital mammography images

    NASA Astrophysics Data System (ADS)

    Alyassin, Abdal M.

    2002-05-01

    3D digital mammography (3DDM) is a new technology that provides high-resolution X-ray breast tomographic data. As with other tomographic medical imaging modalities, viewing a stack of tomographic images can be time consuming, especially if the images have a large matrix size. In addition, it can be difficult to mentally reconstruct 3D breast structures from the slices. Therefore, there is a need to readily visualize the data in 3D. However, one of the issues that hinders the use of volume rendering (VR) is finding an automatic way to generate transfer functions that efficiently map the important diagnostic information in the data. We have developed a method that randomly samples the volume. Based on the mean and the standard deviation of these samples, the technique determines the lower and upper limits of a piecewise linear ramp transfer function. We have volume rendered several 3DDM data sets using this technique and visually compared the outcome with the result of a conventional automatic technique. The transfer function generated by the proposed technique provided superior VR images over the conventional technique. Furthermore, the reproducibility of the transfer function improved with the number of samples taken from the volume, at the expense of processing time.
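The sampling idea can be made concrete in a short sketch: draw random voxel samples, then place a linear opacity ramp around their mean. The width factor `k` and the sample count below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def auto_transfer_function(volume, n_samples=10000, k=2.0, seed=None):
    """Sketch of the sampling approach described above: randomly sample the
    volume, then place a piecewise-linear opacity ramp between
    mean - k*std and mean + k*std. k and n_samples are illustrative."""
    rng = np.random.default_rng(seed)
    flat = volume.ravel()
    samples = rng.choice(flat, size=min(n_samples, flat.size), replace=False)
    mu, sigma = samples.mean(), samples.std()
    lower, upper = mu - k * sigma, mu + k * sigma

    def opacity(value):
        # Linear ramp: fully transparent below `lower`, fully opaque above `upper`.
        return float(np.clip((value - lower) / max(upper - lower, 1e-9), 0.0, 1.0))

    return lower, upper, opacity

vol = np.linspace(0.0, 1.0, 64).reshape(4, 4, 4)
lower, upper, opacity = auto_transfer_function(vol, n_samples=64, seed=0)
```

More samples make `mu` and `sigma` (and hence the ramp endpoints) more reproducible across runs, matching the trade-off the abstract reports.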

  5. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-06-01

    The study proposes a novel approach for segmentation and visualization plus value-added surface area and volume measurements for brain medical image analysis. The proposed method contains edge detection and Bayesian based level set segmentation, surface and volume rendering, and surface area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experiment results are finally reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
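The surface-area and volume measurements on a closed triangle mesh (such as marching-cubes output) reduce to elementary vector algebra: triangle cross products for area, and the divergence theorem for enclosed volume. This is a generic sketch of that kind of measurement, not the authors' implementation:

```python
import numpy as np

def mesh_surface_area_and_volume(vertices, triangles):
    """Surface area and enclosed volume of a closed triangle mesh, e.g. one
    produced by marching cubes. Area sums per-triangle cross-product norms;
    volume sums signed tetrahedra against the origin (divergence theorem).
    Assumes consistent triangle winding for the volume term."""
    v = np.asarray(vertices, dtype=float)
    t = np.asarray(triangles, dtype=int)
    a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # Signed volume of tetrahedra formed with the origin, then take |.|.
    volume = np.abs(np.einsum("ij,ij->i", a, np.cross(b, c)).sum()) / 6.0
    return area, volume

# Unit right tetrahedron as a closed test mesh.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
area, volume = mesh_surface_area_and_volume(verts, tris)
```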

  6. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    PubMed

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.

  7. Frontal slab composite magnetic resonance neurography of the brachial plexus: implications for infraclavicular block approaches.

    PubMed

    Raphael, David T; McIntee, Diane; Tsuruda, Jay S; Colletti, Patrick; Tatevossian, Ray

    2005-12-01

    Magnetic resonance neurography (MRN) is an imaging method by which nerves can be selectively highlighted. Using commercial software, the authors explored a variety of approaches to develop a three-dimensional volume-rendered MRN image of the entire brachial plexus and used it to evaluate the accuracy of infraclavicular block approaches. With institutional review board approval, MRN of the brachial plexus was performed in 10 volunteer subjects. MRN imaging was performed on a GE 1.5-tesla magnetic resonance scanner (General Electric Healthcare Technologies, Waukesha, WI) using a phased array torso coil. Coronal STIR and T1 oblique sagittal sequences of the brachial plexus were obtained. Multiple software programs were explored for enhanced display and manipulation of the composite magnetic resonance images. The authors developed a frontal slab composite approach that allows single-frame reconstruction of a three-dimensional volume-rendered image of the entire brachial plexus. Automatic segmentation was supplemented by manual segmentation in nearly all cases. For each of three infraclavicular approaches (posteriorly directed needle below midclavicle, infracoracoid, or caudomedial to coracoid), the targeting error was measured as the distance from the MRN plexus midpoint to the approach-targeted site. Composite frontal slabs (coronal views), which are single-frame three-dimensional volume renderings from image-enhanced two-dimensional frontal view projections of the underlying coronal slices, were created. The targeting errors (mean +/- SD) for the three approaches (midclavicle, infracoracoid, and caudomedial to coracoid) were 0.43 +/- 0.67, 0.99 +/- 1.22, and 0.65 +/- 1.14 cm, respectively. Image-processed three-dimensional volume-rendered MRN scans, which allow visualization of the entire brachial plexus within a single composite image, have educational value in illustrating the complexity and individual variation of the plexus.
Suggestions for improved guidance during infraclavicular block procedures are presented.

  8. Hyoid bone development: An assessment of optimal CT scanner parameters and 3D volume rendering techniques

    PubMed Central

    Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.

    2015-01-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349

  9. Hyoid Bone Development: An Assessment Of Optimal CT Scanner Parameters and Three-Dimensional Volume Rendering Techniques.

    PubMed

    Cotter, Meghan M; Whyms, Brian J; Kelly, Michael P; Doherty, Benjamin M; Gentry, Lindell R; Bersu, Edward T; Vorperian, Houri K

    2015-08-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared with corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. © 2015 Wiley Periodicals, Inc.

  10. GPU-based multi-volume ray casting within VTK for medical applications.

    PubMed

    Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-03-01

    Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective with the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal and short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the visualization toolkit (VTK) pipeline with the ability to apply different operations (e.g., transformations, clipping, and cropping) on each volume separately. The implemented method is cross-platform (Windows, Linux and Mac OSX) and runs on different graphics cards (NVIDIA and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³ voxels. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (> 15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets.
The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates. The MVRC was effective when used in a neurosurgical navigation application.
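The per-ray compositing step that the MVRC applies to the interleaved samples is the standard front-to-back α-blending recurrence, which a minimal sketch makes concrete (the early-termination threshold below is an illustrative choice):

```python
def composite_front_to_back(samples):
    """Front-to-back alpha blending along one ray. `samples` is a sequence
    of (color, alpha) pairs ordered front to back -- in a multi-volume
    ray caster these interleave samples from all volumes along the ray."""
    color_acc, alpha_acc = 0.0, 0.0
    for color, alpha in samples:
        weight = (1.0 - alpha_acc) * alpha   # remaining transparency x sample alpha
        color_acc += weight * color
        alpha_acc += weight
        if alpha_acc >= 0.99:                # early ray termination (illustrative)
            break
    return color_acc, alpha_acc

# Two half-transparent samples: the front one dominates.
color, alpha = composite_front_to_back([(1.0, 0.5), (0.0, 0.5)])
```

Because each ray's accumulation is independent, one GPU thread per pixel can run this loop in parallel, which is what makes the interactive frame rates reported above possible.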

  11. Four-dimensional ultrasonography of the fetal heart using color Doppler spatiotemporal image correlation.

    PubMed

    Gonçalves, Luís F; Romero, Roberto; Espinoza, Jimmy; Lee, Wesley; Treadwell, Marjorie; Chintala, Kavitha; Brandl, Helmut; Chaiworapongsa, Tinnakorn

    2004-04-01

    To describe clinical and research applications of 4-dimensional imaging of the fetal heart using color Doppler spatiotemporal image correlation. Forty-four volume data sets were acquired by color Doppler spatiotemporal image correlation. Seven subjects were examined: 4 fetuses without abnormalities, 1 fetus with ventriculomegaly and a hypoplastic cerebellum but normal cardiac anatomy, and 2 fetuses with cardiac anomalies detected by fetal echocardiography (1 case of a ventricular septal defect associated with trisomy 21 and 1 case of a double-inlet right ventricle with a 46,XX karyotype). The median gestational age at the time of examination was 21 3/7 weeks (range, 19 5/7-34 0/7 weeks). Volume data sets were reviewed offline by multiplanar display and volume-rendering methods. Representative images and online video clips illustrating the diagnostic potential of this technology are presented. Color Doppler spatiotemporal image correlation allowed multiplanar visualization of ventricular septal defects, multiplanar display and volume rendering of tricuspid regurgitation, volume rendering of the outflow tracts by color and power Doppler ultrasonography (both in a normal case and in a case of a double-inlet right ventricle with a double-outlet right ventricle), and visualization of venous streams at the level of the foramen ovale. Color Doppler spatiotemporal image correlation has the potential to simplify visualization of the outflow tracts and improve the evaluation of the location and extent of ventricular septal defects. Other applications include 3-dimensional evaluation of regurgitation jets and venous streams at the level of the foramen ovale.

  12. Remote volume rendering pipeline for mHealth applications

    NASA Astrophysics Data System (ADS)

    Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald

    2014-03-01

    We introduce a novel remote volume rendering pipeline for medical visualization targeted for mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners with respect to the complexity of the volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.

  13. Fast software-based volume rendering using multimedia instructions on PC platforms and its application to virtual endoscopy

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro

    2003-05-01

    This paper describes a software-based fast volume rendering (VolR) method on a PC platform that uses multimedia instructions, such as SIMD instructions, which are currently available in PC CPUs. This method achieves fast rendering speed through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot-products of normal vectors and light source direction vectors, (d) memory access to a huge data area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes of volume rendering using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512 × 512 × 489 voxels by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware at thirteen frames per second. Semi-translucent display is also possible.
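Process (a), value interpolation at a non-integer sample point, is ordinary trilinear blending. The paper's speed comes from vectorizing this kind of arithmetic with SIMD instructions; the scalar reference sketch below only shows what is being computed:

```python
import numpy as np

def trilinear_sample(volume, x, y, z):
    """Trilinear interpolation of a voxel value at a non-integer sample
    point -- process (a) of the ray-casting pipeline described above.
    Scalar reference version; a SIMD implementation evaluates several
    such blends per instruction. Assumes the 2x2x2 neighborhood is in range."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    c = volume[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2].astype(float)  # 2x2x2 cell
    c = c[0] * (1 - fx) + c[1] * fx       # blend along x
    c = c[0] * (1 - fy) + c[1] * fy       # blend along y
    return c[0] * (1 - fz) + c[1] * fz    # blend along z

# A volume whose value is linear in the coordinates, so interpolation is exact.
vol = np.fromfunction(lambda i, j, k: i + 2 * j, (4, 4, 4))
```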

  14. "Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; vanGelder, Allen

    1999-01-01

    During the four years of this grant (including the one-year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple zone grids, and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model to approximate samples in the regions covered by each node of the tree, and an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
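Generating approximately evenly distributed viewpoints on an enclosing sphere, as in the light-field comparison above, can be sketched with a Fibonacci lattice. The paper derives its 32 positions from a tessellated sphere; the lattice below is a simpler stand-in with a similar spread, not the authors' construction:

```python
import math

def sphere_viewpoints(n=32, radius=1.0):
    """Approximately evenly distributed camera positions on a sphere via a
    Fibonacci lattice -- a simple stand-in for positions taken from a
    tessellated sphere, as described in the text above."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle increment
    points = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n           # heights uniform in (-1, 1)
        r = math.sqrt(max(0.0, 1.0 - y * y))    # ring radius at this height
        theta = golden * i
        points.append((radius * r * math.cos(theta),
                       radius * y,
                       radius * r * math.sin(theta)))
    return points

pts = sphere_viewpoints(32)
```

Rendering the volume from each point and differencing the resulting images gives the view-independent error estimate the text describes.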

  15. Hybrid rendering of the chest and virtual bronchoscopy [corrected].

    PubMed

    Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D

    2000-10-30

    Thin-section spiral computed tomography was used to acquire the volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, thus producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.

  16. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.

  17. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy.

    PubMed

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-11-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data.

  18. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy

    PubMed Central

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-01-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon–bone–muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18–30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data. PMID:19678857

  19. Hierarchical and Parallelizable Direct Volume Rendering for Irregular and Multiple Grids

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; VanGelder, Allen; Tarantino, Paul; Gibbs, Jonathan

    1996-01-01

    A general volume rendering technique is described that efficiently produces images of excellent quality from data defined over irregular grids having a wide variety of formats. Rendering is done in software, eliminating the need for special graphics hardware, as well as any artifacts associated with graphics hardware. Images of volumes with about one million cells can be produced in one to several minutes on a workstation with a 150 MHz processor. A significant advantage of this method for applications such as computational fluid dynamics is that it can process multiple intersecting grids. Such grids present problems for most current volume rendering techniques. Also, the wide range of cell sizes (by a factor of 10,000 or more), which is typical of such applications, does not present difficulties, as it does for many techniques. A spatial hierarchical organization makes it possible to access data from a restricted region efficiently. The tree has greater depth in regions of greater detail, determined by the number of cells in the region. It also makes it possible to render useful 'preview' images very quickly (about one second for one-million-cell grids) by displaying each region associated with a tree node as one cell. Previews show enough detail to navigate effectively in very large data sets. The algorithmic techniques include use of a kappa-d tree, with prefix-order partitioning of triangles, to reduce the number of primitives that must be processed for one rendering, coarse-grain parallelism for a shared-memory MIMD architecture, a new perspective transformation that achieves greater numerical accuracy, and a scanline algorithm with depth sorting and a new clipping technique.
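The adaptive-depth spatial hierarchy can be sketched as a k-d tree over cell centroids: denser regions recurse deeper, and storing a per-node cell count is what lets a preview pass draw one proxy cell per node. The leaf size and node layout below are illustrative choices, not the paper's data structure:

```python
def build_kd_tree(cells, depth=0, leaf_size=4):
    """Sketch of a k-d hierarchy over cell centroids. Regions with more
    cells split further, so tree depth tracks local grid detail; the
    per-node count supports fast previews that render one proxy per node.
    `cells` is a list of (x, y, z) centroids; leaf_size is illustrative."""
    if len(cells) <= leaf_size:
        return {"cells": cells}
    axis = depth % 3                       # cycle the split axis: x, y, z
    cells = sorted(cells, key=lambda c: c[axis])
    mid = len(cells) // 2
    return {
        "axis": axis,
        "split": cells[mid][axis],
        "count": len(cells),               # enables one-proxy-per-node previews
        "left": build_kd_tree(cells[:mid], depth + 1, leaf_size),
        "right": build_kd_tree(cells[mid:], depth + 1, leaf_size),
    }

tree = build_kd_tree([(float(i), 0.0, 0.0) for i in range(10)])
```

A range query that descends only the subtrees overlapping a region gives the restricted-region access the abstract mentions.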

  20. Effect of Reduced Tube Voltage on Diagnostic Accuracy of CT Colonography.

    PubMed

    Futamata, Yoshihiro; Koide, Tomoaki; Ihara, Riku

    2017-01-01

    The normal tube voltage in computed tomography colonography (CTC) is 120 kV. Some reports indicate that the use of a low tube voltage (lower than 120 kV) technique plays a significant role in the reduction of radiation dose. However, to determine whether a lower tube voltage can reduce radiation dose without compromising diagnostic accuracy, an evaluation of images obtained while maintaining the volume CT dose index (CTDIvol) is required. This study investigated the effect of reduced tube voltage in CTC, without modifying radiation dose (i.e., constant CTDIvol), on image quality. Evaluation of image quality involved the shape of the noise power spectrum, surface profiling with volume rendering (VR), and receiver operating characteristic (ROC) analysis. The shape of the noise power spectrum obtained with tube voltages of 80 kV and 100 kV differed from that produced with a tube voltage of 120 kV. Moreover, a higher standard deviation was observed on volume-rendered images generated using the reduced tube voltages. In addition, ROC analysis revealed a statistically significant drop in diagnostic accuracy with reduced tube voltage, indicating that the modification of tube voltage affects volume-rendered images. The results of this study suggest that reducing tube voltage in CTC, so as to reduce radiation dose, affects image quality and diagnostic accuracy.

  1. Combined approach of shell and shear-warp rendering for efficient volume visualization

    NASA Astrophysics Data System (ADS)

    Falcao, Alexandre X.; Rocha, Leonardo M.; Udupa, Jayaram K.

    2003-05-01

    In medical imaging, shell rendering (SR) and shear-warp rendering (SWR) are two ultra-fast and effective methods for volume visualization. We have previously shown that, typically, SWR can be on average 1.38 times faster than SR, but it requires 2 to 8 times more memory space than SR. In this paper, we propose an extension of the compact shell data structure utilized in SR to allow shear-warp factorization of the viewing matrix, in order to obtain speed gains for SR without paying the high storage price of SWR. The new approach is called shear-warp shell rendering (SWSR). The paper describes the methods, points out their major computational differences, and presents a comparative analysis of them in terms of speed, storage, and image quality. The experiments involve hard and fuzzy boundaries of 10 different objects of various sizes, shapes, and topologies, rendered on a 1 GHz Pentium-III PC with 512 MB RAM, utilizing surface and volume rendering strategies. The results indicate that SWSR offers the best compromise between speed and storage among these methods. We also show that SWSR improves rendition quality over SR, and provides renditions similar to those produced by SWR.
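    The shear-warp factorization at the heart of SWSR can be illustrated by splitting a viewing direction into a principal slicing axis plus per-slice shear terms applied to the other two axes. The sketch below uses an illustrative sign convention and is not the authors' code.

```python
import numpy as np

def shear_warp_factors(view_dir):
    """Split a viewing direction into a principal axis plus shear terms.

    Returns the index of the principal slicing axis (the axis most
    aligned with the view) and the per-slice shear applied to the other
    two axes, as in shear-warp factorization of the viewing matrix.
    """
    v = np.asarray(view_dir, dtype=float)
    k = int(np.argmax(np.abs(v)))              # principal viewing axis
    others = [i for i in range(3) if i != k]
    shear = {i: -v[i] / v[k] for i in others}  # shear per unit slice step
    return k, shear
```

    After shearing, all viewing rays become parallel to the principal axis, so slices can be composited with a fast object-order traversal and the result corrected by a cheap 2D warp.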

  2. Color-coded depth information in volume-rendered magnetic resonance angiography

    NASA Astrophysics Data System (ADS)

    Smedby, Orjan; Edsborg, Karin; Henriksson, John

    2004-05-01

    Magnetic Resonance Angiography (MRA) and Computed Tomography Angiography (CTA) data are usually presented using Maximum Intensity Projection (MIP) or Volume Rendering Technique (VRT), but these often fail to demonstrate a stenosis if the projection angle is not suitably chosen. In order to make vascular stenoses visible in projection images independent of the choice of viewing angle, a method is proposed to supplement these images with colors representing the local caliber of the vessel. After preprocessing the volume image with a median filter, segmentation is performed by thresholding, and a Euclidean distance transform is applied. The distance to the background from each voxel in the vessel is mapped to a color. These colors can either be rendered directly using MIP or be presented together with opacity information based on the original image using VRT. The method was tested in a synthetic dataset containing a cylindrical vessel with stenoses in varying angles. The results suggest that the visibility of stenoses is enhanced by the color information. In clinical feasibility experiments, the technique was applied to clinical MRA data. The results are encouraging and indicate that the technique can be used with clinical images.
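    The caliber-mapping pipeline above (median filtering, thresholding, Euclidean distance transform, color lookup) can be illustrated with a brute-force distance transform on a binary vessel mask. This toy Python sketch covers only the distance-transform step and is far slower than a production EDT; it is illustrative, not the paper's implementation.

```python
import numpy as np

def caliber_map(vessel):
    """Brute-force Euclidean distance transform for a small binary mask.

    For every vessel voxel, find the distance to the nearest background
    voxel; a color lookup table would then map small calibers (possible
    stenoses) to warning colors. Real pipelines use a linear-time EDT.
    """
    bg = np.argwhere(~vessel)                  # background coordinates
    out = np.zeros(vessel.shape)
    for p in np.argwhere(vessel):
        out[tuple(p)] = np.sqrt(((bg - p) ** 2).sum(axis=1).min())
    return out
```

    The resulting per-voxel distances can be rendered directly with MIP or combined with opacity from the original image for VRT, as the abstract describes.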

  3. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

    We present a cluster-based volume rendering system for roaming very large volumes. This system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
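    The "check peers before disk" caching policy can be sketched as a small LRU brick cache. The class and callback names here are hypothetical, not the authors' API; a real system would hold texture bricks and fetch them over the network.

```python
from collections import OrderedDict

class BrickCache:
    """Minimal sketch of the paper's idea: on a cache miss, query peer
    nodes' memory before falling back to (slow) local disk."""

    def __init__(self, capacity, fetch_remote, fetch_disk):
        self.capacity = capacity
        self.fetch_remote = fetch_remote   # returns brick data or None
        self.fetch_disk = fetch_disk       # always returns brick data
        self.bricks = OrderedDict()        # brick_id -> data, LRU order

    def get(self, brick_id):
        if brick_id in self.bricks:
            self.bricks.move_to_end(brick_id)      # mark most recent
            return self.bricks[brick_id]
        data = self.fetch_remote(brick_id)         # peer memory first
        if data is None:
            data = self.fetch_disk(brick_id)       # local disk last
        if len(self.bricks) >= self.capacity:
            self.bricks.popitem(last=False)        # evict least recent
        self.bricks[brick_id] = data
        return data
```

    The reported 4x speedup over direct disk access corresponds to the `fetch_remote` path succeeding for bricks resident on other nodes.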

  4. Topology-aware illumination design for volume rendering.

    PubMed

    Zhou, Jianlong; Wang, Xiuying; Cui, Hui; Gong, Peng; Miao, Xianglin; Miao, Yalin; Xiao, Chun; Chen, Fang; Feng, Dagan

    2016-08-19

    Direct volume rendering is one of the most flexible and effective approaches to inspecting large volumetric data such as medical and biological images. In conventional volume rendering, setting up a meaningful illumination environment is often time consuming. Moreover, conventional illumination approaches usually assign the same illumination-model parameter values to different structures manually, and thus neglect the important illumination variations due to structure differences. We introduce a novel topology-based illumination design paradigm for volume rendering that automates illumination parameter definitions meaningfully. The topological features are extracted from the contour tree of the input volumetric data. The automation of illumination design is based on four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize differences in perceived illuminance between structures, a two-phase topology-aware illuminance perception contrast model is proposed, based on the psychological concept of Just-Noticeable-Difference. The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results showed that the approach is more effective in depth and shape depiction, and provides higher perceptual differences between structures.

  5. Automatic Perceptual Color Map Generation for Realistic Volume Visualization

    PubMed Central

    Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor

    2008-01-01

    Advances in computed tomography imaging technology and inexpensive high-performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples created by the algorithm described here, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume rendered patient data. PMID:18430609
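    The core idea, natural hues constrained to a monotonically increasing grayscale-like luminance ramp, can be sketched as follows. The anchor colors, luminance weights, and ramp endpoints are illustrative choices, not the paper's algorithm.

```python
import numpy as np

LUM = np.array([0.299, 0.587, 0.114])   # Rec. 601 luminance weights

def perceptual_colormap(base_colors, n=256, max_lum=0.45):
    """Build a color map whose hues follow natural anchor colors while
    luminance rises linearly like grayscale, preserving intensity order.
    The modest max_lum keeps all channels unclipped for typical anchors."""
    base = np.asarray(base_colors, dtype=float)
    base = base / (base @ LUM)[:, None]         # keep hue, drop brightness
    x = np.linspace(0, len(base) - 1, n)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, len(base) - 1)
    t = (x - lo)[:, None]
    hue = base[lo] * (1 - t) + base[hi] * t     # interpolate hue anchors
    target = np.linspace(0.02, max_lum, n)      # grayscale-like ramp
    rgb = hue * (target / (hue @ LUM))[:, None] # impose target luminance
    return np.clip(rgb, 0.0, 1.0)
```

    Because luminance equals the target ramp at every entry, two intensities that differ in the data remain distinguishable even on a grayscale reproduction.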

  6. FluoRender: joint freehand segmentation and visualization for many-channel fluorescence data analysis.

    PubMed

    Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles

    2017-05-26

    Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.

  7. Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high-resolution MR imaging.

    PubMed

    Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A

    1999-08-01

    Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.

  8. A parallel coordinates style interface for exploratory volume visualization.

    PubMed

    Tory, Melanie; Potts, Simeon; Möller, Torsten

    2005-01-01

    We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.

  9. Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework

    PubMed Central

    Kroes, Thomas; Post, Frits H.; Botha, Charl P.

    2012-01-01

    The field of volume visualization has undergone rapid development during the past years, both due to advances in suitable computing hardware and due to the increasing availability of large volume datasets. Recent work has focused on increasing the visual realism in Direct Volume Rendering (DVR) by integrating a number of visually plausible but often effect-specific rendering techniques, for instance modeling of light occlusion and depth of field. Besides yielding more attractive renderings, the more realistic lighting in particular has a positive effect on perceptual tasks. Although these new rendering techniques yield impressive results, they exhibit limitations in terms of their flexibility and their performance. Monte Carlo ray tracing (MCRT), coupled with physically based light transport, is the de-facto standard for synthesizing highly realistic images in the graphics domain, although usually not from volumetric data. Due to the stochastic sampling of MCRT algorithms, numerous effects can be achieved in a relatively straightforward fashion. For this reason, we have developed a practical framework that applies MCRT techniques to DVR. With this work, we demonstrate that a host of realistic effects, including physically based lighting, can be simulated in a generic and flexible fashion, leading to interactive DVR with improved realism. In the hope that this improved approach to DVR will see more use in practice, we have made our framework available under a permissive open source license. PMID:22768292
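    A basic MCRT building block for volumes is a delta-tracking (Woodcock) transmittance estimator, sketched below for a 1D medium. This is a generic illustration of stochastic sampling through a heterogeneous volume, not Exposure Render's implementation.

```python
import math
import random

def woodcock_transmittance(sigma, s_max, sigma_max, rng, n_samples=2000):
    """Monte Carlo (delta-tracking) estimate of the transmittance through
    a medium with extinction coefficient sigma(s) over distance s_max.
    sigma_max must bound sigma everywhere along the ray."""
    escaped = 0
    for _ in range(n_samples):
        s = 0.0
        while True:
            # advance by an exponentially distributed free-flight distance
            s -= math.log(1.0 - rng.random()) / sigma_max
            if s >= s_max:          # left the medium: photon transmitted
                escaped += 1
                break
            # accept a real collision with probability sigma/sigma_max
            if rng.random() < sigma(s) / sigma_max:
                break               # absorbed or scattered
    return escaped / n_samples
```

    For a homogeneous medium the estimate converges to exp(-sigma * s_max); the same tracking loop handles arbitrary density fields, which is what makes MCRT so flexible for DVR.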

  10. [Virtual endoscopy with a volumetric reconstruction technic: the technical aspects].

    PubMed

    Pavone, P; Laghi, A; Panebianco, V; Catalano, C; Giura, R; Passariello, R

    1998-06-01

    We analyze the technical features peculiar to virtual endoscopy obtained with volume rendering. Our preliminary experience is based on virtual endoscopy images from volumetric data acquired with spiral CT (Siemens, Somatom Plus 4) using acquisition protocols standardized for different anatomic areas. Images are reformatted at the CT console to obtain 1 mm thick contiguous slices, and transferred in DICOM format to an O2 workstation (Silicon Graphics, Mountain View, CA, USA) with a processor speed of 180 MHz, 256 MB of RAM and a 4.1 GB hard disk. The software is Vitrea 1.0 (Vital Images, Fairfield, Iowa), running on a Unix platform. Image output is obtained through the Ethernet network to a Macintosh computer and a thermal printer (Kodak 8600 XLS). Diagnostic-quality images were obtained in all cases. Fly-through in the airways allowed correct evaluation of the main bronchi and of the origin of the segmental bronchi. In the vascular district, both carotid strictures and abdominal aortic aneurysms were depicted with the same accuracy as with conventional reconstruction techniques. In the colon studies, polypoid lesions were correctly depicted in all cases, with good correlation with endoscopic and double-contrast barium enema findings. In a case of lipoma of the ascending colon, virtual endoscopy allowed us to study the colon both cranially and caudally to the lesion. The simultaneous evaluation of axial CT images permitted us to characterize the lesion correctly on the basis of its density values. The peculiar feature of volume rendering is the use of all the information inside the imaging volume to reconstruct three-dimensional images; no threshold values are used and no data are lost, as opposed to conventional image reconstruction techniques. The different anatomic structures are visualized by modifying their reciprocal opacities, showing structures of no interest as translucent. The modulation of the different opacities is obtained by modifying the shape of the opacity curve, either using pre-set curves or in a completely independent way. Other technical features of volume rendering are the perspective evaluation of objects, color, and lighting. In conclusion, volume rendering is a promising technique for elaborating three-dimensional images, offering very realistic endoscopic views. At present, its main limitation is the need for powerful, high-cost workstations.
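    The opacity-curve modulation described above can be sketched as a piecewise-linear transfer function over CT numbers. The control points below are illustrative values, not the clinical presets used in the study.

```python
import numpy as np

def opacity_curve(control_points):
    """Piecewise-linear opacity transfer function: the shape of the curve
    decides which density ranges render translucent or opaque.
    Control points are (HU value, opacity) pairs."""
    pts = sorted(control_points)
    xs = np.array([p[0] for p in pts], dtype=float)
    ys = np.array([p[1] for p in pts], dtype=float)
    def opacity(hu):
        # np.interp clamps outside the first/last control point
        return np.interp(hu, xs, ys)
    return opacity

# e.g. render air and soft tissue translucent, contrast/bone opaque
tf = opacity_curve([(-1000, 0.0), (100, 0.0), (300, 0.8), (1500, 1.0)])
```

    Raising or lowering individual control points reproduces the "pre-set curve" versus "completely independent" adjustment the abstract mentions.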

  11. Roles of universal three-dimensional image analysis devices that assist surgical operations.

    PubMed

    Sakamoto, Tsuyoshi

    2014-04-01

    The circumstances surrounding medical image analysis have undergone rapid evolution. In such a situation, it can be said that the "imaging" obtained through medical imaging modalities and the "analysis" that we employ have become amalgamated. Recently, we feel that the distance between "imaging" and "analysis" has become closer in the image analysis of any organ system, as if both terms mentioned above have become integrated. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR) used in the helical scan had a significant impact and became the basis for recent image analysis. Subsequently, curved MPR (CPR) and other methods were developed, and 3D diagnostic imaging and image analysis of the human body began in earnest. Volume rendering: the development of a new rendering algorithm and the significant improvement of memory and CPUs contributed to the development of "volume rendering," which allows 3D views with retained internal information. A new value was created by this development; computed tomography (CT) images that had previously been only for "diagnosis" became "applicable to treatment." In the past, before the development of volume rendering, a clinician had to mentally reconstruct an image reconfigured for diagnosis into a 3D image, but these developments have allowed the depiction of a 3D image on a monitor. Current technology: currently, in Japan, estimation of the liver volume and of the perfusion areas of the portal vein and hepatic vein is vigorously being adopted during preoperative planning for hepatectomy. Such a circumstance seems to have been brought about by the substantial improvement of these basic techniques and by upgrades to the user interface that allow doctors to perform the manipulations easily themselves. The following describes the specific techniques.
Future of post-processing technology: It is expected, in terms of the role of image analysis, for better or worse, that computer-aided diagnosis (CAD) will develop to a highly advanced level in every diagnostic field. Further, it is also expected in the treatment field that a technique coordinating various devices will be strongly required as a surgery navigator. Actually, surgery using an image navigator is being widely studied, and coordination with hardware, including robots, will also be developed. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  12. Four-dimensional ultrasonography of the fetal heart with spatiotemporal image correlation.

    PubMed

    Gonçalves, Luís F; Lee, Wesley; Chaiworapongsa, Tinnakorn; Espinoza, Jimmy; Schoen, Mary Lou; Falkensammer, Peter; Treadwell, Marjorie; Romero, Roberto

    2003-12-01

    This study was undertaken to describe a new technique for the examination of the fetal heart using four-dimensional ultrasonography with spatiotemporal image correlation (STIC). Volume data sets of the fetal heart were acquired with a new cardiac gating technique (STIC), which uses automated transverse and longitudinal sweeps of the anterior chest wall. These volumes were obtained from 69 fetuses: 35 normal, 16 with congenital anomalies not affecting the cardiovascular system, and 18 with cardiac abnormalities. Dynamic multiplanar slicing and surface rendering of cardiac structures were performed. To illustrate the STIC technique, two representative volumes from a normal fetus were compared with volumes obtained from fetuses with the following congenital heart anomalies: atrioventricular septal defect, tricuspid stenosis, tricuspid atresia, and interrupted inferior vena cava with abnormal venous drainage. Volume datasets obtained with a transverse sweep were utilized to demonstrate the cardiac chambers, moderator band, interatrial and interventricular septa, atrioventricular valves, pulmonary veins, and outflow tracts. With the use of a reference dot to navigate the four-chamber view, intracardiac structures could be simultaneously studied in three orthogonal planes. The same volume dataset was used for surface rendering of the atrioventricular valves. The aortic and ductal arches were best visualized when the original plane of acquisition was sagittal. Volumes could be interactively manipulated to simultaneously visualize both outflow tracts, in addition to the aortic and ductal arches. Novel views of specific structures were generated. For example, the location and extent of a ventricular septal defect was imaged in a sagittal view of the interventricular septum. Furthermore, surface-rendered images of the atrioventricular valves were employed to distinguish between normal and pathologic conditions.
Representative video clips were posted on the Journal's Web site to demonstrate the diagnostic capabilities of this new technique. Dynamic multiplanar slicing and surface rendering of the fetal heart are feasible with STIC technology. One good quality volume dataset, obtained from a transverse sweep, can be used to examine the four-chamber view and the outflow tracts. This novel method may assist in the evaluation of fetal cardiac anatomy.

  13. [Usefulness of volume rendering stereo-movie in neurosurgical craniotomies].

    PubMed

    Fukunaga, Tateya; Mokudai, Toshihiko; Fukuoka, Masaaki; Maeda, Tomonori; Yamamoto, Kouji; Yamanaka, Kozue; Minakuchi, Kiyomi; Miyake, Hirohisa; Moriki, Akihito; Uchida, Yasufumi

    2007-12-20

    In recent years, advancements in MR technology combined with the development of the multi-channel coil have resulted in substantially shortened inspection times. In addition, rapid improvement in the performance of workstations has greatly simplified the image-making process. Consequently, graphical images of intra-cranial lesions can be easily created. For example, the use of three-dimensional spoiled gradient echo (3D-SPGR) volume rendering (VR) after injection of a contrast medium is applied clinically as a preoperative reference image. Recently, improvements in the resolution of 3D-SPGR VR have enabled accurate surface images of the brain to be obtained. We used stereo imaging created by weighted maximum intensity projection (weighted MIP) to determine the skin incision line. Furthermore, the stereo imaging technique utilizing 3D-SPGR VR was used in the cases presented here. The techniques we report appear to be very useful in the pre-operative simulation of neurosurgical craniotomy.

  14. Interactive dual-volume rendering visualization with real-time fusion and transfer function enhancement

    NASA Astrophysics Data System (ADS)

    Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong

    2006-03-01

    Dual-modality imaging scanners combining functional PET and anatomical CT pose a challenge for volumetric visualization, which can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools to navigate and manipulate the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, and tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in volume renderings of PET/CT. This works by assigning a non-linear opacity to the voxels, allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes; for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resultant manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUTs, and volume slicing, our strategy permits efficient visualization of PET/CT volume renderings, which can potentially aid interpretation and diagnosis.
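    One plausible reading of the "alpha-spike" transfer function and per-voxel fusion is sketched below. The triangular spike shape and the linear fusion blend are assumptions for illustration; the paper does not specify these exact formulas.

```python
import numpy as np

def alpha_spike(values, center, width, peak=1.0):
    """Hypothetical 'alpha-spike': a narrow triangular opacity peak that
    reveals only voxels whose intensity lies near one chosen value."""
    a = peak * (1.0 - np.abs(values - center) / width)
    return np.clip(a, 0.0, None)

def fuse(ct_rgb, pet_rgb, ratio):
    """Per-voxel linear fusion of independently rendered CT and PET:
    ratio=0 shows pure CT, ratio=1 pure PET."""
    return (1.0 - ratio) * ct_rgb + ratio * pet_rgb
```

    Sliding `center` across the intensity range sweeps the revealed structure, while `ratio` trades anatomical context against functional contrast, matching the interactive workflow the abstract describes.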

  15. Volume rendering based on magnetic resonance imaging: advances in understanding the three-dimensional anatomy of the human knee

    PubMed Central

    Anastasi, Giuseppe; Bramanti, Placido; Di Bella, Paolo; Favaloro, Angelo; Trimarchi, Fabio; Magaudda, Ludovico; Gaeta, Michele; Scribano, Emanuele; Bruschetta, Daniele; Milardi, Demetrio

    2007-01-01

    The choice of medical imaging techniques, for the purpose of the present work aimed at studying the anatomy of the knee, derives from the increasing use of images in diagnostics, research and teaching, and the subsequent importance that these methods are gaining within the scientific community. Medical systems using virtual reality techniques also offer a good alternative to traditional methods, and are considered among the most important tools in the areas of research and teaching. In our work we have shown some possible uses of three-dimensional imaging for the study of the morphology of the normal human knee, and its clinical applications. We used the direct volume rendering technique, and created a data set of images and animations to allow us to visualize the single structures of the human knee in three dimensions. Direct volume rendering makes use of specific algorithms to transform conventional two-dimensional magnetic resonance imaging sets of slices into see-through volume data set images. It is a technique which does not require the construction of intermediate geometric representations, and has the advantage of allowing the visualization of a single image of the full data set, using semi-transparent mapping. Digital images of human structures, and in particular of the knee, offer important information about anatomical structures and their relationships, and are of great value in the planning of surgical procedures. On this basis we studied seven volunteers with an average age of 25 years, who underwent magnetic resonance imaging. After elaboration of the data through post-processing, we analysed the structure of the knee in detail. The aim of our investigation was the three-dimensional image, in order to comprehend better the interactions between anatomical structures. We believe that these results, applied to living subjects, widen the frontiers in the areas of teaching, diagnostics, therapy and scientific research. PMID:17645453

  16. Distributed volume rendering and stereoscopic display for radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Hancock, David J.

    The thesis describes attempts to use direct volume rendering techniques to produce visualisations useful in the preparation of radiotherapy treatment plans. The selected algorithms allow the generation of data-rich images which can be used to assist the radiologist in comprehending complicated three-dimensional phenomena. The treatment plans are formulated using a three dimensional model which combines patient data acquired from CT scanning and the results of a simulation of the radiation delivery. Multiple intersecting beams with shaped profiles are used and the region of intersection is designed to closely match the position and shape of the targeted tumour region. The proposed treatment must be evaluated as to how well the target region is enveloped by the high dose occurring where the beams intersect, and also as to whether the treatment is likely to expose non-tumour regions to unacceptably high levels of radiation. Conventionally the plans are reviewed by examining CT images overlaid with contours indicating dose levels. Volume visualisation offers a possible saving in time by presenting the data in three dimensional form thereby removing the need to examine a set of slices. The most difficult aspect is to depict unambiguously the relationships between the different data. For example, if a particular beam configuration results in unintended irradiation of a sensitive organ, then it is essential to ensure that this is clearly displayed, and that the 3D relationships between the beams and other data can be readily perceived in order to decide how to correct the problem. The user interface has been designed to present a unified view of the different techniques available for identifying features of interest within the data. The system differs from those previously reported in that complex visualisations can be constructed incrementally, and several different combinations of features can be viewed simultaneously. 
    To maximise the quantity of relevant data presented in a single view, large regions of the data are rendered very transparently. This is done to ensure that interesting features buried deep within the data are visible from any viewpoint. Rendering images with high degrees of transparency raises a number of problems, primarily the drop in quality of depth cues in the image, but also the increase in computational requirements over surface-based visualisations. One solution to the increase in image generation times is the use of parallel architectures, which are an attractive platform for large visualisation tasks such as this. A parallel implementation of the direct volume rendering algorithm is described and its performance is evaluated. Several issues must be addressed in implementing an interactive rendering system in a distributed computing environment: principally overcoming the latency and limited bandwidth of the typical network connection. This thesis reports a pipelining strategy developed to improve the level of interactivity in such situations. Stereoscopic image presentation offers a method to offset the reduction in clarity of the depth information in the transparent images. The results of an investigation into the effectiveness of stereoscopic display as an aid to perception in highly transparent images are presented. Subjects were shown scenes of a synthetic test data set in which conventional depth cues were very limited. The experiments were designed to discover what effect stereoscopic viewing of the transparent, volume-rendered images had on users' depth perception.
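    The direct volume rendering at issue composites samples front to back along each ray; the sketch below shows why very low opacities let deep features remain visible. This is the generic compositing formulation, not the thesis code.

```python
def composite_ray(colors, alphas):
    """Front-to-back alpha compositing along one ray.

    With very low per-sample alphas, transmittance stays high, so
    samples deep along the ray still contribute to the final color;
    the trade-off is weaker occlusion-based depth cues.
    """
    color = 0.0
    transmittance = 1.0
    for c, a in zip(colors, alphas):
        color += transmittance * a * c   # weight by remaining visibility
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:         # early ray termination
            break
    return color, transmittance
```

    Front-to-back order also enables early ray termination, one of the standard optimizations for the parallel renderer described above.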

  17. Three-dimensional display of cortical anatomy and vasculature: MR angiography versus multimodality integration

    NASA Astrophysics Data System (ADS)

    Henri, Christopher J.; Pike, Gordon; Collins, D. Louis; Peters, Terence M.

    1990-07-01

    We present two methods for acquiring and viewing integrated 3-D images of cerebral vasculature and cortical anatomy. The aim of each technique is to provide the neurosurgeon or radiologist with a 3-D image containing information which cannot ordinarily be obtained from a single imaging modality. The first approach employs recent developments in MR, which is now capable of imaging flowing blood as well as static tissue. Here, true 3-D data are acquired and displayed using volume or surface rendering techniques. The second approach is based on the integration of x-ray projection angiograms and tomographic image data, allowing a composite image of anatomy and vasculature to be viewed in 3-D. This is accomplished by superimposing an angiographic stereo-pair onto volume rendered images of either CT or MR data created from matched viewing geometries. The two approaches are outlined and compared. Results are presented for each technique and potential clinical applications are discussed.

  18. Comparison of alternative image reformatting techniques in micro-computed tomography and tooth clearing for detailed canal morphology.

    PubMed

    Lee, Ki-Wook; Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Lim, Sang-Min; Chang, Seok Woo; Ha, Byung-Hyun; Zhu, Qiang; Kum, Kee-Yeon

    2014-03-01

    Micro-computed tomography (MCT) shows detailed root canal morphology that is not seen with traditional tooth clearing. However, alternative image reformatting techniques in MCT involving 2-dimensional (2D) minimum intensity projection (MinIP) and 3-dimensional (3D) volume-rendering reconstruction have not been directly compared with clearing. The aim was to compare alternative image reformatting techniques in MCT with tooth clearing on the mesiobuccal (MB) root of maxillary first molars. Eighteen maxillary first molar MB roots were scanned, and 2D MinIP and 3D volume-rendered images were reconstructed. Subsequently, the same MB roots were processed by traditional tooth clearing. Images from 2D, 3D, 2D + 3D, and clearing techniques were assessed by 4 endodontists to classify canal configuration and to identify fine anatomic structures such as accessory canals, intercanal communications, and loops. All image reformatting techniques in MCT showed detailed configurations and numerous fine structures, such that none were classified as simple type I or II canals; several were classified as types III and IV according to Weine classification or types IV, V, and VI according to Vertucci; and most were nonclassifiable because of their complexity. The clearing images showed less detail, few fine structures, and numerous type I canals. Classification of canal configuration was in 100% intraobserver agreement for all 18 roots visualized by any of the image reformatting techniques in MCT but for only 4 roots (22.2%) classified according to Weine and 6 (33.3%) classified according to Vertucci, when using the clearing technique. The combination of 2D MinIP and 3D volume-rendered images showed the most detailed canal morphology and fine anatomic structures. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  19. Virtual Whipple: preoperative surgical planning with volume-rendered MDCT images to identify arterial variants relevant to the Whipple procedure.

    PubMed

    Brennan, Darren D; Zamboni, Giulia; Sosna, Jacob; Callery, Mark P; Vollmer, Charles M V; Raptopoulos, Vassilios D; Kruskal, Jonathan B

    2007-05-01

    The purposes of this study were to combine a thorough understanding of the technical aspects of the Whipple procedure with advanced rendering techniques by introducing a virtual Whipple procedure and to evaluate the utility of this new rendering technique in prediction of the arterial variants that cross the anticipated surgical resection plane. The virtual Whipple is a novel technique that follows the complex surgical steps in a Whipple procedure. Three-dimensional reconstructed angiographic images are used to identify arterial variants for the surgeon as part of the preoperative radiologic assessment of pancreatic and ampullary tumors.

  20. Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci™ robotic console.

    PubMed

    Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe

    2013-09-01

    Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology was considered very useful by the surgeon. This technique is a step toward computer-aided surgery, a field that is likely to progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Intraoperative utilization of advanced imaging modalities in a complex kidney stone case: a pilot case study.

    PubMed

    Christiansen, Andrew R; Shorti, Rami M; Smith, Cory D; Prows, William C; Bishoff, Jay T

    2018-05-01

    Despite the increasing use of advanced 3D imaging techniques and 3D printing, these techniques have not yet been comprehensively compared in a surgical setting. The purpose of this study is to explore the effectiveness of five different advanced imaging modalities during a complex renal surgical procedure. A patient with a horseshoe kidney and multiple large, symptomatic stones that had failed extracorporeal shock wave lithotripsy (ESWL) and ureteroscopy treatment was used for this evaluation. CT data were used to generate five different imaging modalities: a 3D printed model, three different volume-rendered models, and a geometric CAD model. A survey was used to evaluate the quality and breadth of the imaging modalities during four different phases of the laparoscopic procedure. In this complex kidney procedure, the CAD model, the 3D print, and the three volume-rendered models (basic, interactive, and displayed on an autostereoscopic 3D display) demonstrated added insight and complemented the surgical procedure. CAD manual segmentation allowed tissue layers and/or kidney stones to be rendered in color and semi-transparent, allowing easier navigation through abnormal vasculature. The 3D print allowed simultaneous visualization of the renal pelvis and surrounding vasculature. Our preliminary exploration indicates that various advanced imaging modalities, when properly utilized and supported during surgery, can usefully complement the CT data and the laparoscopic display. This study suggests that imaging modalities such as those utilized in this case can be beneficial intraoperatively, depending on the surgical step involved, and may be more helpful than 3D printed models. We also present factors to consider when evaluating advanced imaging modalities during complex surgery.

  2. 3D SPECT/CT fusion using image data projection of bone SPECT onto 3D volume-rendered CT images: feasibility and clinical impact in the diagnosis of bone metastasis.

    PubMed

    Ogata, Yuji; Nakahara, Tadaki; Ode, Kenichi; Matsusaka, Yohji; Katagiri, Mari; Iwabuchi, Yu; Itoh, Kazunari; Ichimura, Akira; Jinzaki, Masahiro

    2017-05-01

    We developed a method of projecting bone SPECT image data onto 3D volume-rendered CT images for 3D SPECT/CT fusion. The aims of our study were to evaluate its feasibility and clinical usefulness. Whole-body bone scintigraphy (WB) and SPECT/CT scans were performed in 318 cancer patients using a dedicated SPECT/CT system. Volume data of bone SPECT and CT were fused to obtain 2D SPECT/CT images. To generate our 3D SPECT/CT images, colored voxel data of bone SPECT were projected onto the corresponding location in the volume-rendered CT data after a semi-automatic bone extraction. The resultant 3D images were then blended with conventional volume-rendered CT images, making it possible to grasp the three-dimensional relationship between bone metabolism and anatomy. WB and SPECT (WB + SPECT), 2D SPECT/CT fusion, and 3D SPECT/CT fusion were evaluated by two independent reviewers in the diagnosis of bone metastasis. The inter-observer variability and diagnostic accuracy of these three image sets were investigated using a four-point diagnostic scale. Increased bone metabolism was found in 744 metastatic sites and 1002 benign changes. On a per-lesion basis, inter-observer agreements in the diagnosis of bone metastasis were 0.72 for WB + SPECT, 0.90 for 2D SPECT/CT, and 0.89 for 3D SPECT/CT. Receiver operating characteristic analyses for the diagnostic accuracy of bone metastasis showed that WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT had an area under the curve of 0.800, 0.983, and 0.983 for reader 1 and 0.865, 0.992, and 0.993 for reader 2, respectively (WB + SPECT vs. 2D or 3D SPECT/CT, p < 0.001; 2D vs. 3D SPECT/CT, n.s.). The durations of interpretation of WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT images were 241 ± 75, 225 ± 73, and 182 ± 71 s for reader 1 and 207 ± 72, 190 ± 73, and 179 ± 73 s for reader 2, respectively. As a result, 3D SPECT/CT images took less time to read than 2D SPECT/CT (p < 0.0001) or WB + SPECT images (p < 0.0001).
3D SPECT/CT fusion offers comparable diagnostic accuracy to 2D SPECT/CT fusion. The visual effect of 3D SPECT/CT fusion facilitates reduction of reading time compared to 2D SPECT/CT fusion.

  3. Improving the visualization of 3D ultrasound data with 3D filtering

    NASA Astrophysics Data System (ADS)

    Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. 
Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
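The moving-average technique mentioned above can be sketched in a few lines of NumPy: a k x k x k boxcar filter is applied as three separable 1-D running means computed from cumulative sums, so the per-voxel cost is independent of the kernel size. This is an illustrative sketch, not the paper's implementation; the function names and the edge-replication padding are assumptions.

```python
import numpy as np

def moving_average_1d(vol, k, axis):
    """Running mean of width k along one axis, via a cumulative sum so the
    cost per voxel does not depend on k. Borders are edge-replicated."""
    r = k // 2
    pad = [(0, 0)] * vol.ndim
    pad[axis] = (r, r)
    p = np.pad(vol, pad, mode="edge").astype(np.float64)
    c = np.cumsum(p, axis=axis)
    # prepend a zero so each windowed sum is a simple difference of cumsums
    zshape = list(c.shape)
    zshape[axis] = 1
    c = np.concatenate([np.zeros(zshape), c], axis=axis)
    n = vol.shape[axis]
    hi = [slice(None)] * vol.ndim
    lo = [slice(None)] * vol.ndim
    hi[axis] = slice(k, k + n)
    lo[axis] = slice(0, n)
    return (c[tuple(hi)] - c[tuple(lo)]) / k

def box_filter_3d(vol, k=3):
    """k x k x k boxcar smoothing as three separable 1-D passes."""
    out = vol.astype(np.float64)
    for axis in range(3):
        out = moving_average_1d(out, k, axis)
    return out
```

Because each pass is two array additions and one subtraction per voxel, a 7x7x7 smoothing costs no more than a 3x3x3 one, which is what makes the two-pass (pre-gradient and pre-compositing) filtering scheme affordable.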

  4. Volumetric depth peeling for medical image display

    NASA Astrophysics Data System (ADS)

    Borland, David; Clarke, John P.; Fielding, Julia R.; Taylor II, Russell M.

    2006-01-01

    Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.
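The core idea of VDP, rendering invisible every voxel between the viewpoint and the object of interest, can be illustrated with a toy one-ray sketch: skip samples until the ray has passed through the first region exceeding an occlusion threshold, then composite front-to-back as usual. The threshold test and function names below are illustrative assumptions, not the authors' algorithm verbatim.

```python
def composite_front_to_back(colors, alphas):
    """Standard front-to-back alpha compositing along one ray."""
    acc_c, acc_a = 0.0, 0.0
    for c, a in zip(colors, alphas):
        acc_c += (1.0 - acc_a) * a * c
        acc_a += (1.0 - acc_a) * a
        if acc_a > 0.99:  # early ray termination
            break
    return acc_c, acc_a

def vdp_ray(samples, colors, alphas, occl_threshold):
    """Depth-peeling-style ray: ignore everything up to and including the
    first region whose raw value exceeds occl_threshold, then composite
    normally from the far side of that occluder."""
    i, n = 0, len(samples)
    while i < n and samples[i] <= occl_threshold:  # skip in front of occluder
        i += 1
    while i < n and samples[i] > occl_threshold:   # skip through the occluder
        i += 1
    return composite_front_to_back(colors[i:], alphas[i:])
```

Note how the occlusion decision uses the raw sample values while the compositing uses the transfer-function colors and opacities, which is the decoupling the abstract describes.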

  5. Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data

    PubMed Central

    Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.

    2005-01-01

    The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787

  6. Direct volumetric rendering based on point primitives in OpenGL.

    PubMed

    da Rosa, André Luiz Miranda; de Almeida Souza, Ilana; Yuuji Hira, Adilson; Zuffo, Marcelo Knörich

    2006-01-01

    The aim of this project is to present a software-based rendering algorithm for acquired volumetric data. The algorithm was implemented in Java using the LWJGL graphics library, allowing volume rendering to be performed in software and thus avoiding the need for dedicated graphics boards for the 3D reconstruction. The algorithm creates an OpenGL model from point primitives, in which each voxel becomes a point whose color is taken from the corresponding pixel position in the source images.
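The voxel-to-point conversion described above can be sketched as a data-preparation step independent of OpenGL itself: each voxel above a visibility threshold becomes one point with a position and a color derived from its value. The thresholding and grayscale mapping here are illustrative assumptions.

```python
import numpy as np

def volume_to_points(volume, threshold=0):
    """Turn each voxel above a visibility threshold into a point primitive:
    an (x, y, z) position plus a grayscale color taken from the voxel value,
    ready to be uploaded as GL_POINTS vertex and color arrays."""
    z, y, x = np.nonzero(volume > threshold)
    positions = np.stack([x, y, z], axis=1).astype(np.float32)
    colors = (volume[z, y, x] / volume.max()).astype(np.float32)
    return positions, colors
```

Dropping the below-threshold voxels is what keeps the point count, and hence the software rendering cost, manageable for sparse volumes.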

  7. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  8. Establishing the 3-D finite element solid model of femurs in partial by volume rendering.

    PubMed

    Zhang, Yinwang; Zhong, Wuxue; Zhu, Haibo; Chen, Yun; Xu, Lingjun; Zhu, Jianmin

    2013-01-01

    Although several methods of femoral 3-D finite element modeling are already available, three-dimensional (3-D) finite element solid models of partial femurs built by the volume rendering method are rarely reported. We aim to analyze the advantages of this modeling method by establishing a 3-D finite element solid model of a partial femur by volume rendering. A 3-D finite element model of the normal human femur, comprising three anatomic structures (cortical bone, cancellous bone, and pulp cavity), was constructed after pretreatment of the original CT images. Finite element analysis was then carried out with different material properties: three types of material assigned to cortical bone, six to cancellous bone, and a single type to the pulp cavity. The established 3-D finite element model of the femur contains three anatomical structures: cortical bone, cancellous bone, and pulp cavity. The compressive stress was concentrated primarily in the medial surfaces of the femur, especially in the calcar femorale. Compared with whole-femur modeling by the volume rendering method, the 3-D finite element solid model created from a partial femur is more realistic and better suited to finite element analysis. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  9. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With growing computing capability and display size, the mobile device has become a tool that helps clinicians view patient information and medical images anywhere and anytime. However, transferring medical images, with their large data size, from a picture archiving and communication system to a mobile client is difficult and time-consuming, since wireless networks are unstable and bandwidth-limited. Moreover, limited computing capability, memory, and battery life make it hard to provide a satisfactory quality of experience for radiologists performing complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. To allow mobile devices on different platforms to access post-processing of medical images, the protocol is described in the Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g., window leveling, pixel value retrieval) and 3D post-processing (e.g., maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance allows a mobile device to access post-processing of medical image services on the render server via a client application or a web page.
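To make the four-part protocol concrete, here is a minimal sketch of how one 2D post-processing request (window leveling) might be assembled as XML. The element and attribute names are invented for illustration; the paper's actual schema is not specified in the abstract.

```python
import xml.etree.ElementTree as ET

def build_window_level_request(session_token, image_uid, center, width):
    """Assemble a hypothetical 2D post-processing request (window leveling)
    as an XML message; element names are illustrative, not the authors'
    actual schema."""
    req = ET.Element("request", {"type": "post2d", "operation": "windowLevel"})
    ET.SubElement(req, "session").text = session_token        # user authentication
    ET.SubElement(req, "imageUID").text = image_uid           # image to process
    params = ET.SubElement(req, "parameters")
    ET.SubElement(params, "windowCenter").text = str(center)
    ET.SubElement(params, "windowWidth").text = str(width)
    return ET.tostring(req, encoding="unicode")
```

A 3D request (say, a maximum intensity projection) would follow the same shape with a different operation attribute and viewing parameters, which is what makes a single XML protocol able to cover all four parts.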

  10. Simplifying the exploration of volumetric images: development of a 3D user interface for the radiologist's workplace.

    PubMed

    Teistler, M; Breiman, R S; Lison, T; Bott, O J; Pretschner, D P; Aziz, A; Nowinski, W L

    2008-10-01

    Volumetric imaging (computed tomography and magnetic resonance imaging) provides increased diagnostic detail but is associated with the problem of navigation through large amounts of data. In an attempt to overcome this problem, a novel 3D navigation tool has been designed and developed that is based on an alternative input device. A 3D mouse allows for simultaneous definition of position and orientation of orthogonal or oblique multiplanar reformatted images or slabs, which are presented within a virtual 3D scene together with the volume-rendered data set and additionally as 2D images. Slabs are visualized with maximum intensity projection, average intensity projection, or standard volume rendering technique. A prototype has been implemented based on PC technology and tested by several radiologists. It proved easily understandable and usable after a very short learning phase. Our solution may help to fully exploit the diagnostic potential of volumetric imaging by allowing for a more efficient reading process compared to currently deployed solutions based on conventional mouse and keyboard.
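The two slab visualization modes mentioned above (maximum and average intensity projection) reduce to simple per-axis reductions over the selected slab. A sketch, assuming an axis-aligned slab for simplicity (oblique slabs would first require resampling):

```python
import numpy as np

def project_slab(volume, axis, start, thickness, mode="mip"):
    """Collapse a slab of the volume into a single 2-D image, as when a
    reformatted slab is displayed with maximum or average intensity."""
    sl = [slice(None)] * 3
    sl[axis] = slice(start, start + thickness)
    slab = volume[tuple(sl)]
    if mode == "mip":
        return slab.max(axis=axis)   # maximum intensity projection
    elif mode == "avg":
        return slab.mean(axis=axis)  # average intensity projection
    raise ValueError("mode must be 'mip' or 'avg'")
```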

  11. Forensic 3D Visualization of CT Data Using Cinematic Volume Rendering: A Preliminary Study.

    PubMed

    Ebert, Lars C; Schweitzer, Wolf; Gascho, Dominic; Ruder, Thomas D; Flach, Patricia M; Thali, Michael J; Ampanozi, Garyfalia

    2017-02-01

    The 3D volume-rendering technique (VRT) is commonly used in forensic radiology. Its main function is to explain medical findings to state attorneys, judges, or police representatives. New visualization algorithms permit the generation of almost photorealistic volume renderings of CT datasets. The objective of this study is to present and compare a variety of radiologic findings to illustrate the differences between and the advantages and limitations of the current VRT and the physically based cinematic rendering technique (CRT). Seventy volunteers were shown VRT and CRT reconstructions of 10 different cases. They were asked to mark the findings on the images and rate them in terms of realism and understandability. A total of 48 of the 70 questionnaires were returned and included in the analysis. On the basis of most of the findings presented, CRT appears to be equal or superior to VRT with respect to the realism and understandability of the visualized findings. Overall, in terms of realism, the difference between the techniques was statistically significant (p < 0.05). Most participants perceived the CRT findings to be more understandable than the VRT findings, but that difference was not statistically significant (p > 0.05). CRT, which is similar to conventional VRT, is not primarily intended for diagnostic radiologic image analysis, and therefore it should be used primarily as a tool to deliver visual information in the form of radiologic image reports. Using CRT for forensic visualization might have advantages over using VRT if conveying a high degree of visual realism is of importance. Most of the shortcomings of CRT have to do with the software being an early prototype.

  12. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    PubMed

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface rendering and a volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models, in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization combines the advantages of the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationships of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model enables a good to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for a time-consuming detailed analysis and presentation of source images. Virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  13. Advanced texture filtering: a versatile framework for reconstructing multi-dimensional image data on heterogeneous architectures

    NASA Astrophysics Data System (ADS)

    Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich

    2015-01-01

    Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.

  14. Three-dimensional confocal microscopy of the living cornea and ocular lens

    NASA Astrophysics Data System (ADS)

    Masters, Barry R.

    1991-07-01

    The three-dimensional reconstruction of the optic zone of the cornea and the ocular crystalline lens has been accomplished using confocal microscopy and volume rendering computer techniques. A laser scanning confocal microscope was used in the reflected light mode to obtain the two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon ion laser with a 488 nm wavelength. The microscope objective was a Leitz X25, NA 0.6 water immersion lens. The 400 micron thick cornea was optically sectioned into 133 three micron sections. The semi-transparent cornea and the in-situ ocular lens were visualized as high resolution, high contrast two-dimensional images. The structures observed in the cornea include: superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, basal lamina, nerve plexus, nerve fibers, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in-situ ocular lens include: lens capsule, lens epithelial cells, and individual lens fibers. The three-dimensional data sets of the cornea and the ocular lens were reconstructed in the computer using volume rendering techniques. Stereo pairs were also created of the two-dimensional ocular images for visualization. This demonstration of the three-dimensional visualization of the intact, enucleated eye provides an important step toward quantitative three-dimensional morphometry of the eye. The important aspects of three-dimensional reconstruction are discussed.

  15. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  16. Direct Volume Rendering with Shading via Three-Dimensional Textures

    NASA Technical Reports Server (NTRS)

    VanGelder, Allen; Kim, Kwansik

    1996-01-01

A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.
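The rendering phase described above, applying the texture to a stack of parallel planes and compositing the slabs, amounts to a back-to-front application of the standard "over" operator. A minimal NumPy sketch of that compositing step (a generic illustration, not the Voltx implementation; array shapes and names are assumptions):

```python
import numpy as np

def composite_slices(colors, alphas):
    """Back-to-front alpha compositing of parallel texture slices.

    colors: (n_slices, H, W, 3) RGB for each slice
    alphas: (n_slices, H, W) opacity for each slice
    Slice 0 is assumed to be the farthest from the viewer.
    """
    image = np.zeros(colors.shape[1:], dtype=float)
    for color, alpha in zip(colors, alphas):
        a = alpha[..., None]
        # The "over" operator: each nearer slab partially covers
        # the image accumulated so far.
        image = color * a + image * (1.0 - a)
    return image
```

In the hardware-accelerated path this blend is performed by the graphics pipeline (e.g. OpenGL blending), not on the CPU as here.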

  17. Non-photorealistic rendering of virtual implant models for computer-assisted fluoroscopy-based surgical procedures

    NASA Astrophysics Data System (ADS)

    Zheng, Guoyan

    2007-03-01

Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of the surgical action. In so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or through intraoperatively digitized anatomical landmarks, photorealistic rendering of the surgical action has been identified to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.

  18. Three-Dimensional Reconstruction of Thoracic Structures: Based on Chinese Visible Human

    PubMed Central

    Luo, Na; Tan, Liwen; Fang, Binji; Li, Ying; Xie, Bing; Liu, Kaijun; Chu, Chun; Li, Min

    2013-01-01

We established a three-dimensional digitized visible model of human thoracic structures to provide morphological data for imaging diagnosis and for thoracic and cardiovascular surgery. With Photoshop software, the contour lines of the lungs and mediastinal structures, including the heart, aorta and its branches, azygos vein, superior vena cava, inferior vena cava, thymus, esophagus, diaphragm, phrenic nerve, vagus nerve, sympathetic trunk, thoracic vertebrae, sternum, thoracic duct, and so forth, were segmented from the Chinese Visible Human (CVH)-1 data set. The contour data set of the segmented thoracic structures was imported into Amira software, and 3D thorax models were reconstructed via surface rendering and volume rendering. The surface-rendered and volume-rendered models of the thoracic organs can be displayed together clearly and accurately. The result provides a learning tool for interpreting human thoracic anatomy and for virtual thoracic and cardiovascular surgery for medical students and junior surgeons. PMID:24369489

  19. Image fusion for visualization of hepatic vasculature and tumors

    NASA Astrophysics Data System (ADS)

    Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.

    1995-05-01

We have developed segmentation and simultaneous display techniques to facilitate the visualization of the three-dimensional spatial relationships between organ structures and organ vasculature. We concentrate on the visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximum intensity projection (MIP) algorithms are used for data visualization. To extract the liver from the series of images accurately and efficiently, we have developed a user-friendly interactive program with deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures: adjacent contours are aligned and fitted with a Bezier surface to yield a smooth surface. Visualization of the vascular structures, the portal and hepatic veins, is achieved by applying a MIP technique to the extracted liver volume. To integrate the extracted structures, their surface renderings and MIP images are aligned, and a color table is designed for simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, portal veins, hepatic veins, and hepatic tumor can be inspected simultaneously and their spatial relationships can be more easily perceived. The proposed technique will be useful for visualization of both hepatic neoplasm and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
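The step of applying a MIP only to the extracted liver volume can be sketched as a masked maximum-intensity projection (a generic NumPy illustration, not the authors' code; the function name and the minimum-value masking are assumptions):

```python
import numpy as np

def masked_mip(volume, mask, axis=0):
    """Maximum intensity projection restricted to a segmented region.

    Voxels outside `mask` (e.g. outside the extracted liver) are replaced
    by the volume minimum so that only voxels inside the organ, such as
    contrast-enhanced vessels, contribute to the projection.
    """
    suppressed = np.where(mask, volume, volume.min())
    return suppressed.max(axis=axis)
```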

  20. Virtual Sonography Through the Internet: Volume Compression Issues

    PubMed Central

    Vilarchao-Cavia, Joseba; Troyano-Luque, Juan-Mario; Clavijo, Matilde

    2001-01-01

Background: Three-dimensional ultrasound images allow virtual sonography even at a distance. However, the size of the final 3-D files limits their transmission through slow networks such as the Internet. Objective: To analyze compression techniques that transform ultrasound images into small 3-D volumes that can be transmitted through the Internet without loss of relevant medical information. Methods: Samples were selected from ultrasound examinations performed during 1999-2000 in the Obstetrics and Gynecology Department at the University Hospital in La Laguna, Canary Islands, Spain. The conventional ultrasound video output was recorded at 25 fps (frames per second) on a PC, producing 100- to 120-MB files (for 500 to 550 frames). Processing to obtain 3-D images progressively reduced the file size. Results: The original frames passed through different compression stages: selection of the region of interest, rendering techniques, and compression for storage. Final 3-D volumes reached 1:25 compression rates (1.5- to 2-MB files). Those volumes need 7 to 8 minutes to be transmitted through the Internet at a mean data throughput of 6.6 Kbytes per second. At the receiving site, virtual sonography is possible using orthogonal projections or oblique cuts. Conclusions: Modern volume-rendering techniques allow distant virtual sonography through the Internet, the result of efficient data compression that preserves the medically relevant information needed for distant diagnosis. PMID:11720963

  1. 3D Volume Rendering and 3D Printing (Additive Manufacturing).

    PubMed

    Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T

    2018-07-01

    Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Validation of percutaneous puncture trajectory during renal access using 4D ultrasound reconstruction

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Rodrigues, Nuno F.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

An accurate percutaneous puncture is essential for disintegration and removal of renal stones. Although this procedure has proven to be safe, organs surrounding the renal target might be accidentally perforated. This work describes a new intraoperative framework in which tracked surgical tools are superimposed within 4D ultrasound imaging for safety assessment of the percutaneous puncture trajectory (PPT). A PPT is first generated from the skin puncture site towards an anatomical target, using the information retrieved by electromagnetic motion-tracking sensors coupled to the surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct a 4D ultrasound volume around the PPT under GPU processing. Volume hole-filling was performed at different processing time intervals by a tri-linear interpolation method. At spaced time intervals, the volume of the anatomical structures was segmented to ascertain whether any vital structure lies in the path of the PPT and might compromise surgical success. To enhance the visualization of the reconstructed structures, different render transfer functions were used. Results: Real-time US volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views. When using the whole reconstructed volume, 8-15 frames/s were achieved, and 3 frames/s were reached when segmentation and detection of structures intersecting the PPT were introduced. The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT to safely and accurately perform the puncture in percutaneous nephrolithotomy.
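The tri-linear interpolation used for volume hole-filling can be sketched as follows. This is the standard trilinear formulation in voxel-index coordinates, not the authors' GPU implementation; names are illustrative:

```python
import numpy as np

def trilinear_sample(vol, p):
    """Trilinearly interpolate a 3-D scalar volume at point p = (x, y, z),
    given in voxel-index coordinates."""
    x, y, z = p
    i0, j0, k0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    i1 = min(i0 + 1, vol.shape[0] - 1)
    j1 = min(j0 + 1, vol.shape[1] - 1)
    k1 = min(k0 + 1, vol.shape[2] - 1)
    fx, fy, fz = x - i0, y - j0, z - k0
    # Interpolate along x, then y, then z.
    c00 = vol[i0, j0, k0] * (1 - fx) + vol[i1, j0, k0] * fx
    c01 = vol[i0, j0, k1] * (1 - fx) + vol[i1, j0, k1] * fx
    c10 = vol[i0, j1, k0] * (1 - fx) + vol[i1, j1, k0] * fx
    c11 = vol[i0, j1, k1] * (1 - fx) + vol[i1, j1, k1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

A hole in the reconstructed volume would be filled by sampling at its location from the surrounding known voxels in this fashion.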

  3. Intracranial cerebrospinal fluid spaces imaging using a pulse-triggered three-dimensional turbo spin echo MR sequence with variable flip-angle distribution.

    PubMed

    Hodel, Jérôme; Silvera, Jonathan; Bekaert, Olivier; Rahmouni, Alain; Bastuji-Garin, Sylvie; Vignaud, Alexandre; Petit, Eric; Durning, Bruno; Decq, Philippe

    2011-02-01

To assess the three-dimensional turbo spin echo magnetic resonance sequence with variable flip-angle distribution (SPACE: Sampling Perfection with Application-optimised Contrast using different flip-angle Evolution) for the imaging of intracranial cerebrospinal fluid (CSF) spaces. We prospectively investigated 18 healthy volunteers and 25 patients, 20 with communicating hydrocephalus (CH) and five with non-communicating hydrocephalus (NCH), using the SPACE sequence at 1.5 T. Volume rendering views of both intracranial and ventricular CSF were obtained for all patients and volunteers. The subarachnoid CSF distribution was qualitatively evaluated on volume rendering views using a four-point scale. The CSF volumes within the total, ventricular, and subarachnoid spaces were calculated, as well as the ratio between ventricular and subarachnoid CSF volumes. Three different patterns of subarachnoid CSF distribution were observed. In healthy volunteers we found narrowed CSF spaces within the occipital area. A diffuse narrowing of the subarachnoid CSF spaces was observed in patients with NCH, whereas patients with CH exhibited narrowed CSF spaces within the high midline convexity. The ratios between ventricular and subarachnoid CSF volumes were significantly different among the volunteers, patients with CH, and patients with NCH. The assessment of CSF space volume and distribution may help to characterise hydrocephalus.

  4. An Agent Based Collaborative Simplification of 3D Mesh Model

    NASA Astrophysics Data System (ADS)

    Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro

Large-volume mesh models face challenges in fast rendering and transmission over the Internet. Mesh models obtained using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the Mobile-C development platform. Communication among distributed agents includes capturing images of the visualized mesh model, annotating captured images, and instant messaging. Remote and collaborative simplification can be efficiently conducted over the Internet.

  5. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including the mediastinum and lungs but leaving out the rib cage and spine. The problem is addressed with a model-based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and, partially, with heavy metal and motion artifacts.

  6. Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2013-01-01

A software method has been developed that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography (CT). This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2D sheets (or flattened "onion skins") in addition to a series of top-view slices and a 3D volume rendering. The advantages of viewing the data in this fashion are as follows: (1) the use of standard and specialized image processing and analysis methods is facilitated by having 2D array data versus a volume rendering; (2) accurate lateral dimensional analysis of flaws is possible in the unwrapped sheets versus a volume rendering; (3) flaws in the part jump out at the inspector with the proper contrast expansion settings in the unwrapped sheets; and (4) it is much easier for the inspector to locate flaws in the unwrapped sheets versus top-view slices for very thin cylinders. The method is fully automated and requires no input from the user except the proper voxel dimensions from the CT experiment and the wall thickness of the part. The software is available in 32-bit and 64-bit versions, and can be used with binary data (8- and 16-bit) and BMP-type CT image sets. The software has memory (RAM) and hard-drive based modes. The advantage of the 64-bit RAM-based mode is speed; it is very practical for users of 64-bit Windows operating systems and computers having 16 GB or more RAM. The advantage of the hard-drive based analysis is that one can work with essentially unlimited-sized data sets. Separate windows are spawned for the unwrapped/re-sliced data view and any interactive image processing capability. Individual unwrapped images and unwrapped image series can be saved in common image formats. More information is available at http://www.grc.nasa.gov/WWW/OptInstr/NDE_CT_CylinderUnwrapper.html.
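The unwrapping itself is a cylindrical-to-planar resampling: for each top-view slice, the voxels on a circle of fixed radius become one row of a flattened sheet. A minimal sketch under simplifying assumptions (nearest-neighbour sampling, a single radius; names are illustrative, not the tool's actual code):

```python
import numpy as np

def unwrap_slice(slice2d, center, radius, n_theta=360):
    """Sample one top-view CT slice along a circle, giving one row of
    the flattened "onion skin" sheet at that radius."""
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rows = np.clip(np.round(cy + radius * np.sin(thetas)).astype(int),
                   0, slice2d.shape[0] - 1)
    cols = np.clip(np.round(cx + radius * np.cos(thetas)).astype(int),
                   0, slice2d.shape[1] - 1)
    return slice2d[rows, cols]

def unwrap_cylinder(volume, center, radius, n_theta=360):
    """Stack the unwrapped rows of every slice into one 2D sheet;
    repeating over radii through the wall thickness yields the
    full series of sheets."""
    return np.stack([unwrap_slice(s, center, radius, n_theta)
                     for s in volume])
```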

  7. A Parallel Pipelined Renderer for the Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Chiueh, Tzi-Cker; Ma, Kwan-Liu

    1997-01-01

This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space, and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in a 40-50% saving in overall rendering time.
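The trade-off between pipeline startup latency and I/O overlap can be illustrated with a toy timing model (all numbers and the linear-scaling assumption are hypothetical, not measurements from the paper): with g groups, each volume is rendered by 1/g of the processors, so its render time grows roughly g-fold, but g volumes are in flight at once and each group's I/O overlaps the others' rendering.

```python
def total_time(n_volumes, g, t_io, t_render):
    """Toy model of a pipelined time-varying volume renderer.

    t_render is the render time for one volume using ALL processors;
    with g groups it grows to roughly g * t_render. Steady-state
    throughput is one volume per stage/g, since g groups run concurrently.
    """
    stage = t_io + g * t_render   # one group loads and renders one volume
    startup = stage               # pipeline-fill latency
    return startup + (n_volumes - 1) * stage / g

# Sweeping the group count exposes an interior optimum that balances
# startup latency against I/O overlap:
best_g = min(range(1, 17),
             key=lambda g: total_time(16, g, t_io=4.0, t_render=1.0))
```

With these illustrative numbers the optimum lands at an intermediate group count, mirroring the paper's observation that the best partitioning balances resource utilization against startup latency.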

  8. Three-dimensional rendering in medicine: some common misconceptions

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.

    2001-05-01

As seen in the medical imaging literature and in the poster presentations at the annual conference of the Radiological Society of North America during the past 10 years, several misconceptions are held relating to 3D rendering of medical images. The purpose of this presentation is to illustrate and clarify these with medical examples. Most of the misconceptions have to do with a mix-up of the issues related to the common visualization techniques, viz., surface rendering (SR) and volume rendering (VR), and methods of image segmentation. In our survey, we came across the following most commonly held conceptions, which we believe (and shall demonstrate) are not correct: (1) SR is equated to thresholding. (2) VR is considered not to require segmentation. (3) VR is considered to achieve higher resolution than SR. (4) SR/VR is considered to require specialized hardware to achieve adequate speed. We shall briefly define and establish some fundamental terms to obviate any potential for terminology-related misconceptions. Subsequently, we shall sort out these issues and illustrate with examples why the above conceptions are incorrect. There are many SR methods that use segmentations far superior to thresholding. All VR techniques (except the straightforward MIP) require some form of fuzzy object specification, that is, fuzzy segmentation. The details seen in renditions depend fundamentally on the segmentation technique as well as the rendering method. There are fast software-based rendering methods that give a performance on PCs similar to or exceeding that of expensive hardware systems. Most of the difficulties encountered in visualization (and also in image processing and analysis) stem from the difficulties in segmentation. It is important to identify these and separate them from the issues related purely to 3D rendering.

  9. Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing.

    PubMed

    Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee

    2012-05-01

Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks, and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for 3-D fluorescence visualization of the oral cavity in real time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.

  10. Feasibility study: real-time 3-D ultrasound imaging of the brain.

    PubMed

    Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D

    2004-10-01

    We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.

  11. [Big data in imaging].

    PubMed

    Sewerin, Philipp; Ostendorf, Benedikt; Hueber, Axel J; Kleyer, Arnd

    2018-04-01

    Until now, most major medical advancements have been achieved through hypothesis-driven research within the scope of clinical trials. However, due to a multitude of variables, only a certain number of research questions could be addressed during a single study, thus rendering these studies expensive and time consuming. Big data acquisition enables a new data-based approach in which large volumes of data can be used to investigate all variables, thus opening new horizons. Due to universal digitalization of the data as well as ever-improving hard- and software solutions, imaging would appear to be predestined for such analyses. Several small studies have already demonstrated that automated analysis algorithms and artificial intelligence can identify pathologies with high precision. Such automated systems would also seem well suited for rheumatology imaging, since a method for individualized risk stratification has long been sought for these patients. However, despite all the promising options, the heterogeneity of the data and highly complex regulations covering data protection in Germany would still render a big data solution for imaging difficult today. Overcoming these boundaries is challenging, but the enormous potential advances in clinical management and science render pursuit of this goal worthwhile.

  12. Transform coding for hardware-accelerated volume rendering.

    PubMed

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
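The general shape of block-based transform coding can be sketched in 2-D with an orthonormal DCT and scalar quantization. This is a generic illustration, not the paper's optimized 3-D scheme with precomputed reprojection and block classification; function names and the scalar step size are assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; its inverse is simply its transpose."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def encode_block(block, q):
    """Forward separable 2-D transform, then scalar quantization (step q)."""
    c = dct_matrix(block.shape[0])
    return np.round((c @ block @ c.T) / q)

def decode_block(qcoeffs, q):
    """Dequantize and inverse-transform. Because the transform is
    orthonormal, the dequantization factor q could be folded into the
    inverse matrices, echoing the paper's idea of consolidating
    dequantization with the inverse transform."""
    c = dct_matrix(qcoeffs.shape[0])
    return c.T @ (qcoeffs * q) @ c
```

In an asymmetric scheme like the paper's, the encoder may spend arbitrary off-line effort choosing quantizers per block, while the decoder stays this simple.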

  13. An Analysis of Scalable GPU-Based Ray-Guided Volume Rendering

    PubMed Central

    Fogal, Thomas; Schiewe, Alexander; Krüger, Jens

    2014-01-01

Volume rendering continues to be a critical method for analyzing large-scale scalar fields, in disciplines as diverse as biomedical engineering and computational fluid dynamics. Commodity desktop hardware has struggled to keep pace with data size increases, challenging modern visualization software to deliver responsive interactions for O(N³) algorithms such as volume rendering. We target the data type common in these domains: regularly-structured data. In this work, we demonstrate that the major limitation of most volume rendering approaches is their inability to switch the data sampling rate (and thus data size) quickly. Using a volume renderer inspired by recent work, we demonstrate that the actual amount of visualizable data for a scene is typically bound considerably lower than the memory available on a commodity GPU. Our instrumented renderer is used to investigate design decisions typically swept under the rug in volume rendering literature. The renderer is freely available, with binaries for all major platforms as well as full source code, to encourage reproduction and comparison with future research. PMID:25506079

  14. Hybrid 3D visualization of the chest and virtual endoscopy of the tracheobronchial system: possibilities and limitations of clinical application.

    PubMed

    Seemann, M D; Claussen, C D

    2001-06-01

A hybrid rendering method is described that combines a color-coded surface rendering method and a volume rendering method, enabling virtual endoscopic examinations using different representation models. Fourteen patients with malignancies of the lung and mediastinum (n=11) or lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold-interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model, and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system, and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both simultaneous visualization of an airway, an airway lesion, and mediastinal structures and a quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated, or refused. Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and guiding bronchoscopic biopsy, surgical intervention, and palliative therapy, and is likely to be increasingly accepted as a screening method for people with suspected endobronchial malignancy and as a control examination in the aftercare of patients with malignant diseases.

  15. Volume estimation of brain abnormalities in MRI data

    NASA Astrophysics Data System (ADS)

    Suprijadi, Pratama, S. H.; Haryanto, F.

    2014-02-01

Abnormalities of brain tissue are a crucial issue in the medical field. Such conditions can be recognized through segmentation of certain regions of medical images obtained from an MRI dataset. Image processing is a computational method that is very helpful for analyzing MRI data. In this study, a combination of image segmentation and rendering was used to isolate tumor and stroke regions. Two thresholding methods were employed to segment the abnormal areas, followed by filtering to reduce non-abnormality areas. Each MRI image is labeled and then used for volume estimation of the tumor and stroke-attacked areas. The algorithms are shown to be successful in isolating tumor and stroke in MRI images, based on the thresholding parameter and the stated detection accuracy.
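The final step, counting segmented voxels and scaling by the physical voxel size, can be sketched as follows (a generic threshold-and-count estimate; the function name, parameters, and threshold values are illustrative, not the study's actual pipeline):

```python
import numpy as np

def estimate_abnormal_volume(mri, lo, hi, voxel_dims):
    """Segment voxels whose intensity falls in [lo, hi] and estimate the
    volume of the segmented region from the physical voxel dimensions.
    A real pipeline would also filter out small non-abnormality regions
    before counting."""
    mask = (mri >= lo) & (mri <= hi)
    voxel_volume = float(np.prod(voxel_dims))  # e.g. mm^3 per voxel
    return int(mask.sum()) * voxel_volume
```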

  16. Microgravity

    NASA Image and Video Library

    2004-04-15

    Computed tomography (CT) images of resin-impregnated Mechanics of Granular Materials (MGM) specimens are assembled to provide 3-D volume renderings of density patterns formed by dislocation under the external loading stress profile applied during the experiments. Experiments flown on STS-79 and STS-89. Principal Investigator: Dr. Stein Sture

  17. Development of a system for acquiring, reconstructing, and visualizing three-dimensional ultrasonic angiograms

    NASA Astrophysics Data System (ADS)

    Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.

    1995-04-01

    We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.

  18. Imaging system for creating 3D block-face cryo-images of whole mice

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts than in vivo MRI. We note that color cryo-images are close to what a researcher sees at dissection, making the image data easier to interpret. The combination of field of view, depth of field, ultra-high resolution, and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be obtained through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases such as blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

  19. Application of gray level mapping in computed tomographic colonography: a pilot study to compare with traditional surface rendering method for identification and differentiation of endoluminal lesions

    PubMed Central

    Chen, Lih-Shyang; Hsu, Ta-Wen; Chang, Shu-Han; Lin, Chih-Wen; Chen, Yu-Ruei; Hsieh, Chin-Chiang; Han, Shu-Chen; Chang, Ku-Yaw; Hou, Chun-Ju

    2017-01-01

    Objective: In traditional surface rendering (SR) computed tomographic endoscopy, only the shape of an endoluminal lesion is depicted, without gray-level information, unless the volume rendering technique is used. However, volume rendering is relatively slow and complex in terms of computation time and parameter setting. We use computed tomographic colonography (CTC) images as examples and report a new visualization technique, three-dimensional gray level mapping (GM), to better identify and differentiate endoluminal lesions. Methods: Thirty-three endoluminal lesions from 30 patients were evaluated in this clinical study. The cases were segmented using gray-level thresholding, and the marching cubes algorithm was used to detect isosurfaces in the volumetric data sets. GM was applied using the surface gray levels of the CTC data. Radiologists conducted the clinical evaluation of the SR and GM images, and the Wilcoxon signed-rank test was used for data analysis. Results: Clinical evaluation confirms that GM is significantly superior to SR in terms of gray-level pattern and spatial shape presentation of endoluminal lesions (p < 0.01) and significantly improves the confidence of identification and clinical classification of endoluminal lesions (p < 0.01). The specificity and diagnostic accuracy of GM are significantly better than those of SR in the diagnostic performance evaluation (p < 0.01). Conclusion: GM can reduce confusion in three-dimensional CTC and correlates CTC well with sectional images by location as well as gray-level value. Hence, GM improves identification and differentiation of endoluminal lesions and facilitates the diagnostic process. Advances in knowledge: GM significantly improves the traditional SR method by providing reliable gray-level information for the surface points and is helpful in identifying and differentiating endoluminal lesions according to their shape and density. PMID:27925483

  20. [Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].

    PubMed

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

    2011-10-01

    To evaluate the anatomy of the cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution with 3D PSIF-DWI, we collected imaging data of 20 cavernous sinus regions (both sides in 10 normal subjects). 3D PSIF-DWI provided high contrast between the cranial nerves and the surrounding soft tissue, fluid, and blood in all subjects. We also created volume-rendered images from the 3D PSIF-DWI data and anatomically evaluated the reliability of visualizing the optic, oculomotor, trochlear, trigeminal, and abducens nerves. All 20 sets of cranial nerves were visualized, and 12 trochlear nerves and 6 abducens nerves were partially identified. We also present preliminary clinical experience in two cases with pituitary adenomas. The anatomical relationship between the tumor and the cranial nerves running in and around the cavernous sinus could be comprehended three-dimensionally with 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high resolution "cranial nerve imaging" that visualizes the whole length of the cranial nerves, including the segments surrounded by flowing blood, as in the cavernous sinus region.

  1. Volume-rendering on a 3D hyperwall: A molecular visualization platform for research, education and outreach.

    PubMed

    MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy

    2016-11-01

    We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized, and functionally-related images). The visual texture of the images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research and for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.

  2. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independently of the other processors. The global image compositing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
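    Overlapping local ray casting with global compositing works because the front-to-back "over" operator is associative: each processor can produce a partial (color, opacity) segment for its portion of a ray, and the segments are combined in depth order as they arrive. A minimal single-ray sketch, with scalar colors for brevity:

    ```python
    def over(front, back):
        """Composite two (color, alpha) segments with the 'over' operator."""
        cf, af = front
        cb, ab = back
        return (cf + (1.0 - af) * cb, af + (1.0 - af) * ab)

    def composite_ray(segments):
        """Combine per-processor (depth, color, alpha) partial segments of
        one ray in front-to-back order."""
        color, alpha = 0.0, 0.0
        for _, c, a in sorted(segments, key=lambda s: s[0]):
            color, alpha = over((color, alpha), (c, a))
        return color, alpha
    ```

    Because the result depends only on depth order, not on arrival order, compositing can proceed concurrently with the ray casting still running on other processors.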

  3. Inadequate increase in the volume of major epicardial coronary arteries compared with that in left ventricular mass. Novel concept for characterization of coronary arteries using 64-slice computed tomography.

    PubMed

    Ehara, Shoichi; Okuyama, Takuhiro; Shirai, Nobuyuki; Sugioka, Kenichi; Oe, Hiroki; Itoh, Toshihide; Matsuoka, Toshiyuki; Ikura, Yoshihiro; Ueda, Makiko; Naruko, Takahiko; Hozumi, Takeshi; Yoshiyama, Minoru

    2009-08-01

    Previous studies have shown a correlation between coronary artery cross-sectional diameter and left ventricular (LV) mass. However, no studies have examined the correlation between actual coronary artery volume (CAV) and LV mass. In the present study, measurements of CAV by 64-multislice computed tomography (MSCT) were validated and the relationship between CAV and LV mass was investigated. First, coronary artery phantoms consisting of syringes filled with solutions of contrast medium moving at simulated heart rates were scanned by 64-MSCT. Display window settings permitting accurate calculation of small volumes were optimized by evaluating volume-rendered images of the segmented contrast medium at different window settings. Next, 61 patients without significant coronary artery stenosis were scanned by 64-MSCT with the same protocol as for the phantoms. Coronary arteries were segmented on a workstation and the same window settings were applied to the volume-rendered images to calculate total CAV. Significant correlations between total CAV and LV mass (r=0.660, P<0.0001) were found, whereas an inverse relation was present between total CAV per 100 g of LV mass and LV mass. The novel concept of "CAV" for the characterization of coronary arteries may prove useful for future research, particularly on the causes of LV hypertrophy.

  4. Volumetric visualization algorithm development for an FPGA-based custom computing machine

    NASA Astrophysics Data System (ADS)

    Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim

    1998-05-01

    Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.
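    Two of the projection modes named above, simulated X-ray and MIP, reduce to an accumulation and a maximum of the samples along each ray. For an axis-aligned parallel projection (a deliberate simplification of the paper's perspective ray casting) they can be sketched as:

    ```python
    import numpy as np

    def simulated_xray(volume):
        """Simulated X-ray: average the intensity samples along each z ray."""
        return volume.mean(axis=2)

    def mip(volume):
        """Maximum Intensity Projection: keep the brightest sample per z ray."""
        return volume.max(axis=2)
    ```

    In the perspective case each ray must be resampled through the volume at its own set of points, which is where acceleration techniques such as RADC and speculative ray selection pay off.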

  5. 3D cinematic rendering of the calvarium, maxillofacial structures, and skull base: preliminary observations.

    PubMed

    Rowe, Steven P; Zinreich, S James; Fishman, Elliot K

    2018-06-01

    Three-dimensional (3D) visualizations of volumetric data from CT have gained widespread clinical acceptance and are an important method for evaluating complex anatomy and pathology. Recently, cinematic rendering (CR), a new 3D visualization methodology, has become available. CR utilizes a lighting model that allows for the production of photorealistic images from isotropic voxel data. Given how new this technique is, studies to evaluate its clinical utility and any potential advantages or disadvantages relative to other 3D methods such as volume rendering have yet to be published. In this pictorial review, we provide examples of normal calvarial, maxillofacial, and skull base anatomy and pathological conditions that highlight the potential for CR images to aid in patient evaluation and treatment planning. The highly detailed images and nuanced shadowing that are intrinsic to CR are well suited to the display of the complex anatomy in this region of the body. We look forward to studies with CR that will ascertain the ultimate value of this methodology to evaluate calvarium, maxillofacial, and skull base morphology as well as other complex anatomic structures.

  6. High-resolution three-dimensional magnetic resonance imaging of mouse lung in situ.

    PubMed

    Scadeng, Miriam; Rossiter, Harry B; Dubowitz, David J; Breen, Ellen C

    2007-01-01

    This study establishes a method for high-resolution isotropic magnetic resonance (MR) imaging of mouse lungs using tracheal liquid instillation to remove MR susceptibility artifacts. C57BL/6J mice were instilled sequentially with perfluorocarbon and phosphate-buffered saline to an airway pressure of 10, 20, or 30 cm H2O. Imaging was performed in a 7-T MR scanner using a 2.5-cm quadrature volume coil and a three-dimensional (3D) FLASH imaging sequence. Liquid instillation removed magnetic susceptibility artifacts and allowed lung structure to be viewed at an isotropic resolution of 78-90 μm. Instilled liquid and modeled lung volumes were well correlated (R = 0.92; P < 0.05) and differed by a constant tissue volume (220 ± 92 μL). 3D image renderings allowed differences in structural dimensions (volumes and areas) to be accurately measured at each inflation pressure. These data demonstrate the efficacy of pulmonary liquid instillation for in situ high-resolution MR imaging of mouse lungs for accurate measurement of pulmonary airway, parenchymal, and vascular structures.

  7. X-ray microscopy as an approach to increasing accuracy and efficiency of serial block-face imaging for correlated light and electron microscopy of biological specimens.

    PubMed

    Bushong, Eric A; Johnson, Donald D; Kim, Keun-Young; Terada, Masako; Hatori, Megumi; Peltier, Steven T; Panda, Satchidananda; Merkle, Arno; Ellisman, Mark H

    2015-02-01

    The recently developed three-dimensional electron microscopic (EM) method of serial block-face scanning electron microscopy (SBEM) has rapidly established itself as a powerful imaging approach. Volume EM imaging with this scanning electron microscopy (SEM) method requires intense staining of biological specimens with heavy metals to allow sufficient back-scatter electron signal and also to render specimens sufficiently conductive to control charging artifacts. These more extreme heavy metal staining protocols render specimens light opaque and make it much more difficult to track and identify regions of interest (ROIs) for the SBEM imaging process than for a typical thin section transmission electron microscopy correlative light and electron microscopy study. We present a strategy employing X-ray microscopy (XRM) both for tracking ROIs and for increasing the efficiency of the workflow used for typical projects undertaken with SBEM. XRM was found to reveal an impressive level of detail in tissue heavily stained for SBEM imaging, allowing for the identification of tissue landmarks that can be subsequently used to guide data collection in the SEM. Furthermore, specific labeling of individual cells using diaminobenzidine is detectable in XRM volumes. We demonstrate that tungsten carbide particles or upconverting nanophosphor particles can be used as fiducial markers to further increase the precision and efficiency of SBEM imaging.

  8. X-ray Microscopy as an Approach to Increasing Accuracy and Efficiency of Serial Block-face Imaging for Correlated Light and Electron Microscopy of Biological Specimens

    PubMed Central

    Bushong, Eric A.; Johnson, Donald D.; Kim, Keun-Young; Terada, Masako; Hatori, Megumi; Peltier, Steven T.; Panda, Satchidananda; Merkle, Arno; Ellisman, Mark H.

    2015-01-01

    The recently developed three-dimensional electron microscopic (EM) method of serial block-face scanning electron microscopy (SBEM) has rapidly established itself as a powerful imaging approach. Volume EM imaging with this scanning electron microscopy (SEM) method requires intense staining of biological specimens with heavy metals to allow sufficient back-scatter electron signal and also to render specimens sufficiently conductive to control charging artifacts. These more extreme heavy metal staining protocols render specimens light opaque and make it much more difficult to track and identify regions of interest (ROIs) for the SBEM imaging process than for a typical thin section transmission electron microscopy correlative light and electron microscopy study. We present a strategy employing X-ray microscopy (XRM) both for tracking ROIs and for increasing the efficiency of the workflow used for typical projects undertaken with SBEM. XRM was found to reveal an impressive level of detail in tissue heavily stained for SBEM imaging, allowing for the identification of tissue landmarks that can be subsequently used to guide data collection in the SEM. Furthermore, specific labeling of individual cells using diaminobenzidine is detectable in XRM volumes. We demonstrate that tungsten carbide particles or upconverting nanophosphor particles can be used as fiducial markers to further increase the precision and efficiency of SBEM imaging. PMID:25392009

  9. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
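    A minimal LIC sketch follows: for each pixel, a short streamline is traced forward and backward through the vector field, and the noise texture is averaged along it. The kernel length, unit-step Euler integration, and wrap-around boundary handling are illustrative choices, not the paper's fast-recomputation algorithm.

    ```python
    import numpy as np

    def lic(vx, vy, noise, length=10):
        """Line Integral Convolution: average the noise texture along a short
        streamline through (vx, vy) starting at each pixel."""
        h, w = noise.shape
        out = np.zeros_like(noise, dtype=float)
        for y in range(h):
            for x in range(w):
                acc, n = 0.0, 0
                for sign in (1.0, -1.0):        # trace forward and backward
                    px, py = float(x), float(y)
                    for _ in range(length):
                        i = int(round(py)) % h  # wrap-around sampling
                        j = int(round(px)) % w
                        acc += noise[i, j]
                        n += 1
                        u, v = vx[i, j], vy[i, j]
                        mag = np.hypot(u, v)
                        if mag < 1e-9:          # stop at critical points
                            break
                        px += sign * u / mag    # unit Euler step along the field
                        py += sign * v / mag
                out[y, x] = acc / n
        return out
    ```

    The effect is that intensity is smeared along streamlines, so correlated streaks in the output trace the flow direction.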

  10. Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu

    1999-01-01

    We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure that is capable of capturing both temporal and spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values in the spatial domain. However, when time is treated merely as another dimension of a time-varying field, difficulties frequently arise from the discrepancy between the field's spatial and temporal resolutions. In addition, treating the spatial and temporal dimensions equally often prevents detection of the coherence that is unique to the temporal domain. Using the proposed data structure, our algorithm meets the following goals. First, both spatial and temporal coherence are identified and exploited to accelerate the rendering process. Second, the algorithm allows the user to supply the desired error tolerances at run time to trade image quality against rendering speed. Third, the amount of data that must be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
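    The error-tolerance traversal can be sketched as follows: each node caches a mean value plus spatial and temporal error estimates, and a whole subtree is rendered from the cache whenever both errors fall within the user-supplied tolerances. The dictionary node layout is an assumption for illustration; the paper's TSP tree is an octree whose nodes carry binary time trees.

    ```python
    def traverse(node, spatial_tol, temporal_tol, draw):
        """Draw a node's cached mean if its spatial and temporal errors are
        both within tolerance (or it is a leaf); otherwise recurse into its
        children. Returns the number of nodes actually drawn."""
        within = (node["spatial_err"] <= spatial_tol
                  and node["temporal_err"] <= temporal_tol)
        if within or not node["children"]:
            draw(node["mean"])
            return 1
        return sum(traverse(c, spatial_tol, temporal_tol, draw)
                   for c in node["children"])
    ```

    Loosening either tolerance prunes the traversal higher in the tree, which is exactly the image-quality/rendering-speed trade-off described above.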

  11. The physics of volume rendering

    NASA Astrophysics Data System (ADS)

    Peters, Thomas

    2014-11-01

    Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
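    The connection can be made concrete with the discretized emission-absorption transfer equation that direct volume rendering evaluates along each ray: each sample emits radiance that is attenuated by the transmittance accumulated in front of it. A scalar sketch with a uniform step size; the per-sample emission and extinction values are assumed inputs (in RADMC-3D they come from the transfer function applied to the data):

    ```python
    import math

    def render_ray(samples, step):
        """Integrate the emission-absorption model along one ray.
        samples: (emission, extinction) pairs ordered front to back."""
        intensity, transmittance = 0.0, 1.0
        for emission, extinction in samples:
            alpha = 1.0 - math.exp(-extinction * step)  # opacity of this interval
            intensity += transmittance * emission * alpha
            transmittance *= 1.0 - alpha                # light surviving so far
        return intensity
    ```

    Setting extinction to zero recovers a pure emission (additive) projection, while large extinction values make front samples hide everything behind them, which is the physical content of the usual alpha-compositing loop.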

  12. Improved wrist pannus volume measurement from contrast-enhanced MRI in rheumatoid arthritis using shuffle transform.

    PubMed

    Xanthopoulos, Emily; Hutchinson, Charles E; Adams, Judith E; Bruce, Ian N; Nash, Anthony F P; Holmes, Andrew P; Taylor, Christopher J; Waterton, John C

    2007-01-01

    Contrast-enhanced MRI is of value in assessing rheumatoid pannus in the hand, but the images are not always easy to quantitate. Our aim was to develop and evaluate an improved measurement of the volume of enhancing pannus (VEP) in the hand in human rheumatoid arthritis (RA). MR images of the hand and wrist were obtained for 14 patients with RA at 0, 1, and 13 weeks. VEP was measured on images created by subtracting precontrast T1-weighted images from contrast-enhanced T1-weighted images using a shuffle transformation technique. Maximum intensity projection (MIP) and 3D volume rendering of the images were used as a guide to identify the pannus and any contrast-enhanced veins. Visualization of the pannus was much improved following the shuffle transform. Between 0 and 1 week, the mean within-subject coefficient of variation (CoV) was 0.13 and the estimated total CoV was 0.15. There was no evidence of significantly increased variability over the 13-week interval for the complete sample of patients. VEP can be measured reproducibly in the rheumatoid hand using 3D contrast-enhanced MRI and the shuffle transform.

  13. Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.

    PubMed

    Holub, Joseph; Winer, Eliot

    2017-12-01

    Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Add this to the rapid adoption of mobile devices for everyday work and the need to visualize fMRI data on tablets or smartphones arises. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.

  14. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been, and still is, a great challenge to the modern computing world. It stretches the limits of the memory capacity, disk space, network bandwidth, and CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program for SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Finally, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system with a time-varying dataset from selected JPL applications.

  15. Cryo-imaging of fluorescently labeled single cells in a mouse

    NASA Astrophysics Data System (ADS)

    Steyer, Grant J.; Roy, Debashish; Salvado, Olivier; Stone, Meredith E.; Wilson, David L.

    2009-02-01

    We developed a cryo-imaging system to provide single-cell detection of fluorescently labeled cells in mouse, with particular applicability to stem cells and metastatic cancer. The Case cryo-imaging system consists of a fluorescence microscope, robotic imaging positioner, customized cryostat, PC-based control system, and visualization/analysis software. The system alternates between sectioning (10-40 μm) and imaging, collecting color brightfield and fluorescent blockface image volumes >60 GB in size. In mouse experiments, we imaged quantum-dot-labeled stem cells, GFP-labeled cancer and stem cells, and cell-size fluorescent microspheres. To remove subsurface fluorescence, we used a simplified model of light-tissue interaction whereby the next image was scaled, blurred, and subtracted from the current image; the scaling and blurring parameters were estimated by minimizing the entropy of the subtracted images. Tissue-specific attenuation lengths [μT: heart (267 ± 47.6 μm), liver (218 ± 27.1 μm), brain (161 ± 27.4 μm)] were found to be within the range of estimates in the literature. "Next image" processing removed subsurface fluorescence equally well across multiple tissues (brain, kidney, liver, adipose tissue, etc.), and analysis of 200 microsphere images in the brain gave a 97 ± 2% reduction of subsurface fluorescence. Fluorescent signals were determined to arise from single cells based upon geometric and integrated-intensity measurements. Next-image processing greatly improved axial resolution, enabled high quality 3D volume renderings, and improved enumeration of single cells with connected-component analysis by up to 24%. Analysis of image volumes identified metastatic cancer sites, found homing of stem cells to injury sites, and showed that microsphere distribution correlated with blood flow patterns.
    In summary, our cryo-imaging system provides extreme (>60 GB), micron-scale, fluorescence and brightfield image data, and the preprocessing, analysis, and visualization techniques described here improve axial resolution, reduce subsurface fluorescence by 97%, and enable single-cell detection and counting; high quality 3D volume renderings let us evaluate cell distribution patterns. Applications include the myriad biomedical experiments that label cells with fluorescent reporter genes or exogenous fluorophores, such as studies in stem cell regenerative medicine, cancer, and tissue engineering.
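    The "next image" correction can be sketched as below: the next (deeper) section image is scaled, blurred, and subtracted from the current blockface image. The box blur, fixed scale factor, and wrap-around edge handling are illustrative stand-ins; the paper instead fits its scaling and blurring parameters by minimizing the entropy of the subtracted image.

    ```python
    import numpy as np

    def box_blur(img, radius=1):
        """Separable box blur (a simple stand-in for the paper's blur model).
        Edges wrap around via np.roll, which is fine for a sketch."""
        out = img.astype(float)
        for axis in (0, 1):
            out = np.mean([np.roll(out, s, axis=axis)
                           for s in range(-radius, radius + 1)], axis=0)
        return out

    def remove_subsurface(current, next_img, scale=0.3, radius=1):
        """Subtract a scaled, blurred copy of the next section image to
        suppress subsurface fluorescence, clipping negatives to zero."""
        return np.clip(current - scale * box_blur(next_img, radius), 0.0, None)
    ```

    Applied slice by slice through the stack, this suppresses the glow of cells lying below the cut surface so that connected-component counting sees mostly true surface signal.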

  16. A discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking.

    PubMed

    Luo, Xiongbiao; Mori, Kensaku

    2014-06-01

    Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether the similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by the pre-operative images. This paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. When applied to endoscope tracking, the proposed measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, and normalized sum of squared differences. In a clinical data evaluation, the tracking error was reduced significantly, from at least 14.6 mm to 4.5 mm, and processing was accelerated to more than 30 frames per second using a graphics processing unit.
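    The structural-similarity family the paper builds on combines luminance, contrast, and structure (correlation) terms. A minimal global-statistics sketch is below; the stabilizing constants are generic choices, and the paper's discriminative weighting is not specified in the abstract, so this is the plain SSIM form, not the proposed measure itself.

    ```python
    import numpy as np

    def ssim_global(x, y, c1=1e-4, c2=9e-4):
        """Global structural similarity of two images: compares mean
        luminance, contrast (variance), and structure (covariance)."""
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()
        num = (2 * mx * my + c1) * (2 * cov + c2)
        den = (mx**2 + my**2 + c1) * (x.var() + y.var() + c2)
        return num / den
    ```

    In a tracking loop, each candidate camera pose renders a volume image, and the pose maximizing this score against the current video frame is kept; local (windowed) statistics are normally used rather than the global ones shown here.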

  17. Investigation of buoyancy effects on turbulent nonpremixed jet flames by using normal and low-gravity conditions

    NASA Astrophysics Data System (ADS)

    Idicheria, Cherian Alex

    An experimental study was performed with the aim of investigating the structure of transitional and turbulent nonpremixed jet flames under different gravity conditions. In particular, the focus was to determine the effect of buoyancy on the mean and fluctuating characteristics of the jet flames. Experiments were conducted at three gravity levels: 1 g, 20 mg, and 100 μg. The milligravity and microgravity conditions were achieved by dropping a jet-flame rig in the UT-Austin 1.25-second and the NASA-Glenn Research Center 2.2-second drop towers, respectively. The principal diagnostics employed were time-resolved cinematographic imaging of the visible soot luminosity and planar laser Mie scattering (PLMS). For the cinematographic flame luminosity experiments, the flames studied were piloted nonpremixed propane, ethylene, and methane jet flames at source Reynolds numbers ranging from 2000 to 10500. From the soot luminosity images, mean and root-mean-square (RMS) images were computed, and volume rendering of the image sequences was used to investigate the large-scale structure evolution and flame tip dynamics. The relative importance of buoyancy was quantified with the parameter ξL defined by Becker and Yamazaki [1978]. The results show, in contrast to previous microgravity studies, that the high Reynolds number flames have the same flame length irrespective of the gravity level. The RMS fluctuations and volume renderings indicate that the large-scale structure and flame tip dynamics are essentially identical to those of purely momentum-driven flames provided ξL is less than approximately 2. The volume renderings show that the luminous structure celerities (normalized by the jet exit velocity) are approximately constant for ξL < 6, but are substantially larger for ξL > 8. The celerity values for ξL > 8 are seen to follow a ξL^(3/2) scaling, which can be predicted with a simplified momentum equation analysis for the buoyancy-dominated regime.
The underlying turbulent structure and mean mixture fraction characteristics were investigated in nonreacting and reacting jets with a PLMS diagnostic system developed for the UT-Austin 1.25-second drop tower. (Abstract shortened by UMI.)

  18. An HTML5-Based Pure Website Solution for Rapidly Viewing and Processing Large-Scale 3D Medical Volume Reconstruction on Mobile Internet

    PubMed Central

    Chen, Xin; Zhang, Ye; Zhang, Jingna; Li, Ying; Mo, Xuemei; Chen, Wei

    2017-01-01

    This study aimed to propose a pure web-based solution that lets users access large-scale 3D medical volumes anywhere, with a good user experience and complete detail. A novel Master-Slave interaction mode was proposed, which absorbs the advantages of both remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism to listen to interactive requests from clients (the Slave model) and to guide Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors on the Slave model and to improve the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high quality image remained at approximately 1 s, while the peak bandwidth stayed <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution lets users rapidly access the application via one URL hyperlink, without special software or hardware requirements, in a diversified network environment, and can be seamlessly integrated into other telemedical systems. PMID:28638406

  19. An HTML5-Based Pure Website Solution for Rapidly Viewing and Processing Large-Scale 3D Medical Volume Reconstruction on Mobile Internet.

    PubMed

    Qiao, Liang; Chen, Xin; Zhang, Ye; Zhang, Jingna; Wu, Yi; Li, Ying; Mo, Xuemei; Chen, Wei; Xie, Bing; Qiu, Mingguo

    2017-01-01

    This study aimed to propose a pure web-based solution that lets users access large-scale 3D medical volumes anywhere, with a good user experience and complete details. A novel Master-Slave interaction mode was proposed, which combines the advantages of remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism to listen for interactive requests from clients (Slave model) and to guide the Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors in the Slave model and to enhance the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high-quality image remained at approximately 1 s, while the peak bandwidth was <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution serves users through rapid access to the application via one URL hyperlink, without special software or hardware requirements, in a diversified network environment, and can be seamlessly integrated into other telemedical systems.

  20. Integrated VR platform for 3D and image-based models: a step toward interactive image-based virtual environments

    NASA Astrophysics Data System (ADS)

    Yoon, Jayoung; Kim, Gerard J.

    2003-04-01

    Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Before rendering, objects are conservatively culled against the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
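
    The representation-selection rule described in this abstract (a 3D model up close or during interaction, a billboard at intermediate range, an environment map far away) can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation; the function name and switching distances are made-up placeholders.

```python
def select_representation(distance, interacting,
                          depth_threshold=5.0, far_threshold=50.0):
    """Pick a representation for an object given its viewing distance.

    During interaction a 3D representation is always preferred if one
    exists, mirroring the rule in the abstract; otherwise the object
    switches to image-based representations once its internal depth is
    no longer perceptible. Thresholds are illustrative placeholders.
    """
    if interacting:
        return "3d_model"
    if distance < depth_threshold:      # internal depth still perceptible
        return "3d_model"
    if distance < far_threshold:
        return "billboard"
    return "environment_map"

print(select_representation(2.0, interacting=False))    # close range
print(select_representation(20.0, interacting=False))   # intermediate range
print(select_representation(20.0, interacting=True))    # interaction forces 3D
```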

  1. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    PubMed

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images with various tools and techniques to generate 3D renderings for applications such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows these parameters to be stored instead of the parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering, and the important parameters of each task are determined. Special focus is given to the compression of segmented data, the parameterization of the rendering process, and the DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases that require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
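
    As a rough illustration of the parameterization idea, a 3DPR-like record might group the parameters of the four tasks named above into one storable object. The field names below are invented placeholders for illustration only, not the actual DICOM attributes of the proposed 3DPR object.

```python
# Hypothetical sketch: one record grouping parameters for the four
# visualization tasks (pre-processing, segmentation, post-processing,
# rendering). Storing such parameters instead of rendered snapshots lets
# a viewer regenerate the 3D rendering without repeating the tasks.
pr3d = {
    "pre_processing":  {"window_center": 40, "window_width": 400},
    "segmentation":    {"labels": ["liver"], "mask_compression": "run-length"},
    "post_processing": {"smoothing_iterations": 2},
    "rendering":       {"mode": "volume_rendering", "opacity": 0.7},
}

print(sorted(pr3d))
```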

  2. SemVisM: semantic visualizer for medical image

    NASA Astrophysics Data System (ADS)

    Landaeta, Luis; La Cruz, Alexandra; Baranya, Alexander; Vidal, María-Esther

    2015-01-01

    SemVisM is a toolbox that combines medical informatics and computer graphics tools to reduce the semantic gap between low-level image features and high-level semantic concepts/terms. This paper presents a novel strategy for visualizing semantically annotated medical data, combining rendering techniques and segmentation algorithms. SemVisM comprises two main components: i) AMORE (A Modest vOlume REgister), to handle input data (RAW, DAT, or DICOM) and to initially annotate the images using terms defined in medical ontologies (e.g., MeSH, FMA, or RadLex), and ii) VOLPROB (VOlume PRObability Builder), for generating the annotated volumetric data containing the classified voxels that belong to a particular tissue. SemVisM is built on top of the semantic visualizer ANISE.

  3. Development of the mouse cochlea database (MCD).

    PubMed

    Santi, Peter A; Rapson, Ian; Voie, Arne

    2008-09-01

    The mouse cochlea database (MCD) provides an interactive, image database of the mouse cochlea for learning its anatomy and data mining of its resources. The MCD website is hosted on a centrally maintained, high-speed server at the following URL: (http://mousecochlea.umn.edu). The MCD contains two types of image resources, serial 2D image stacks and 3D reconstructions of cochlear structures. Complete image stacks of the cochlea from two different mouse strains were obtained using orthogonal plane fluorescence optical microscopy (OPFOS). 2D images of the cochlea are presented on the MCD website as: viewable images within a stack, 2D atlas of the cochlea, orthogonal sections, and direct volume renderings combined with isosurface reconstructions. In order to assess cochlear structures quantitatively, "true" cross-sections of the scala media along the length of the basilar membrane were generated by virtual resectioning of a cochlea orthogonal to a cochlear structure, such as the centroid of the basilar membrane or the scala media. 3D images are presented on the MCD website as: direct volume renderings, movies, interactive QuickTime VRs, flythrough, and isosurface 3D reconstructions of different cochlear structures. 3D computer models can also be used for solid model fabrication by rapid prototyping and models from different cochleas can be combined to produce an average 3D model. The MCD is the first comprehensive image resource on the mouse cochlea and is a new paradigm for understanding the anatomy of the cochlea, and establishing morphometric parameters of cochlear structures in normal and mutant mice.

  4. Compression and accelerated rendering of volume data using DWT

    NASA Astrophysics Data System (ADS)

    Kamath, Preyas; Akleman, Ergun; Chan, Andrew K.

    1998-09-01

    2D images cannot convey information on object depth and location relative to surfaces. The medical community is increasingly using 3D visualization techniques to view data from CT scans, MRI, etc. 3D images provide more information on depth and location in the spatial domain to help surgeons make better diagnoses. 3D images can be constructed from 2D images using 3D scalar algorithms. With recent advances in communication techniques, it is possible for doctors to diagnose and plan the treatment of a patient who lives at a remote location. This is made possible by transmitting relevant data of the patient via telephone lines. If this information is to be reconstructed in 3D, then 2D images must be transmitted. However, storing 2D datasets occupies a lot of memory, and visualization algorithms are slow. We describe in this paper a scheme that reduces the data transfer time by transmitting only the information that the doctor wants. Compression is achieved by reducing the amount of data transferred. This is possible by applying the 3D wavelet transform to 3D datasets. Since the wavelet transform is localized in both the frequency and spatial domains, we transmit detail only in the region where the doctor needs it. Since only the ROI (region of interest) is reconstructed in detail, we need to render only the ROI in detail, and thus we can reduce the rendering time.
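
    The core idea, transforming the volume into subbands so that detail coefficients can be transmitted selectively per region, can be sketched with a single level of a separable 3D Haar transform. This is a minimal illustration of a 3D DWT, assuming an even-sized volume; the paper's actual wavelet choice and transmission protocol are not specified in the abstract.

```python
import numpy as np

def haar3d(vol):
    """One level of a separable 3D Haar transform (dims must be even).

    Returns the low-pass approximation ("LLL") and the seven detail
    subbands. Transmitting the approximation everywhere but the detail
    subbands only inside a region of interest mirrors the ROI
    compression idea described above.
    """
    def haar_axis(a, axis):
        a = np.moveaxis(a, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2)   # orthonormal Haar pair
        hi = (a[0::2] - a[1::2]) / np.sqrt(2)
        return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

    subbands = {"": vol.astype(float)}
    for axis in range(3):                        # filter each axis in turn
        nxt = {}
        for key, a in subbands.items():
            lo, hi = haar_axis(a, axis)
            nxt[key + "L"] = lo
            nxt[key + "H"] = hi
        subbands = nxt
    approx = subbands.pop("LLL")
    return approx, subbands

vol = np.random.rand(8, 8, 8)
approx, details = haar3d(vol)
print(approx.shape, sorted(details))
```

    Because the transform is orthonormal, the total signal energy is preserved across the eight subbands, so discarding out-of-ROI detail degrades only the regions the doctor did not request.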

  5. 3D Pathology Volumetric Technique: A Method for Calculating Breast Tumour Volume from Whole-Mount Serial Section Images

    PubMed Central

    Clarke, G. M.; Murray, M.; Holloway, C. M. B.; Liu, K.; Zubovits, J. T.; Yaffe, M. J.

    2012-01-01

    Tumour size, most commonly measured by maximum linear extent, remains a strong predictor of survival in breast cancer. Tumour volume, proportional to the number of tumour cells, may be a more accurate surrogate for size. We describe a novel “3D pathology volumetric technique” for lumpectomies and compare it with 2D measurements. Volume renderings and total tumour volume are computed from digitized whole-mount serial sections using custom software tools. Results are presented for two lumpectomy specimens selected for tumour features which may challenge accurate measurement of tumour burden with conventional, sampling-based pathology: (1) an infiltrative pattern admixed with normal breast elements; (2) a localized invasive mass separated from the in situ component by benign tissue. Spatial relationships between key features (tumour foci, close or involved margins) are clearly visualized in volume renderings. Invasive tumour burden can be underestimated using conventional pathology, compared to the volumetric technique (infiltrative pattern: 30% underestimation; localized mass: 3% underestimation for invasive tumour, 44% for in situ component). Tumour volume approximated from 2D measurements (i.e., maximum linear extent), assuming elliptical geometry, was seen to overestimate volume compared to the 3D volumetric calculation (by a factor of 7x for the infiltrative pattern; 1.5x for the localized invasive mass). PMID:23320179
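
    The contrast between the two estimates, summing segmented voxels versus assuming ellipsoidal geometry from linear extents, can be shown with a toy calculation. The synthetic "tumour" below (two small foci spread across a large extent, standing in for an infiltrative pattern) deliberately exaggerates the effect relative to the paper's reported factor of about 7; the voxel size is an assumed value.

```python
import numpy as np

voxel_mm = 0.5                         # isotropic voxel size (assumed)
mask = np.zeros((40, 40, 40), bool)
mask[10:12, 10:12, 10:12] = True       # two small, widely separated foci
mask[30:32, 30:32, 30:32] = True       # (a crude infiltrative pattern)

# 3D volumetric estimate: count segmented voxels, scale by voxel volume.
vol_3d = mask.sum() * voxel_mm**3

# 2D-style estimate: ellipsoid built from the three maximal linear extents.
extents = [np.ptp(np.nonzero(mask)[i]) + 1 for i in range(3)]
a, b, c = (e * voxel_mm / 2 for e in extents)
vol_ellipsoid = 4 / 3 * np.pi * a * b * c

print(vol_3d, vol_ellipsoid)           # ellipsoid grossly overestimates here
```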

  6. Value of three-dimensional volume rendering images in the assessment of the centrality index for preoperative planning in patients with renal masses.

    PubMed

    Sofia, C; Magno, C; Silipigni, S; Cantisani, V; Mucciardi, G; Sottile, F; Inferrera, A; Mazziotti, S; Ascenti, G

    2017-01-01

    To evaluate the precision of the centrality index (CI) measurement on three-dimensional (3D) volume rendering technique (VRT) images in patients with renal masses, compared to its standard measurement on axial images. Sixty-five patients with renal lesions underwent contrast-enhanced multidetector (MD) computed tomography (CT) for preoperative imaging. Two readers calculated the CI on two-dimensional axial images and on VRT images, measuring it in the plane containing both the tumour and the centre of the kidney. Correlation and agreement of interobserver measurements and inter-method results were calculated using intraclass correlation (ICC) coefficients and the Bland-Altman method. Time saving was also calculated. The interobserver correlation coefficients were r=0.99 (p<0.05) for the CI on axial images and r=0.99 (p<0.05) on VRT images, with ICCs of 0.99 and 0.99, respectively. The correlation between the two methods of measuring the CI, on VRT and on axial CT images, was r=0.99 (p<0.05), and the two methods showed a mean difference of -0.03 (SD 0.13). The mean time saving per examination with VRT was 45.5%. The present study showed that VRT and axial images produce almost identical values of CI, with the advantages of greater ease of execution and a time saving of almost 50% for 3D VRT images. In addition, VRT provides an integrated perspective that can better assist surgeons in clinical decision making and in operative planning, suggesting this technique as a possible standard method for CI measurement. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
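
    The Bland-Altman statistics quoted above (a mean difference, i.e. bias, plus limits of agreement derived from the SD of the differences) are straightforward to compute. The CI values below are fabricated for illustration only; they are not the study's data.

```python
import numpy as np

# Made-up paired CI measurements from the two methods (illustrative only).
ci_axial = np.array([1.5, 2.1, 2.8, 3.2, 1.9, 2.4])
ci_vrt   = np.array([1.6, 2.0, 2.9, 3.1, 2.0, 2.4])

diff = ci_vrt - ci_axial
bias = diff.mean()                          # Bland-Altman mean difference
sd = diff.std(ddof=1)                       # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(round(bias, 3), [round(v, 3) for v in loa])
```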

  7. [Three-dimensional reconstruction of functional brain images].

    PubMed

    Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I

    1999-08-01

    We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods to identify activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain surface coloring, make it difficult to understand and interpret the positional relationships among various brain areas. Therefore, we developed three-dimensionally reconstructed images from these functional brain images to improve their interpretation. The subjects were 12 normal volunteers. After PET images acquired during everyday dialogue listening were analyzed by SPM, the following three types of images were constructed: 1) routine images by SPM, 2) three-dimensional static images, and 3) three-dimensional dynamic images. Both the three-dimensional static and dynamic images were created with the volume rendering method of VTK (the Visualization Toolkit). Since the functional brain images did not include the original brain anatomy, we combined the SPM and MRI brain images using custom C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Both the three-dimensional static and dynamic images were processed on a personal computer system. Our newly created images showed clearer positional relationships among activated brain areas than the conventional methods. To date, functional brain images have been employed mainly in fields such as neurology and neurosurgery; however, these images may be useful even in otorhinolaryngology, to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science. Currently, the surface model is the most common method of three-dimensional display; however, the volume rendering method may be more effective for imaging regions such as the brain.

  8. Comparison of mandibular first molar mesial root canal morphology using micro-computed tomography and clearing technique.

    PubMed

    Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Oh, Soram; Gu, Yu; Lee, Seung-Pyo; Chang, Seok Woo; Lee, Woocheol; Baek, Seung-Ho; Zhu, Qiang; Kum, Kee-Yeon

    2015-08-01

    Micro-computed tomography (MCT) with alternative image reformatting techniques shows complex and detailed root canal anatomy. This study compared two-dimensional (2D) and 3D MCT image reformatting with standard tooth clearing for studying mandibular first molar mesial root canal morphology. Extracted human mandibular first molar mesial roots (n=31) were scanned by MCT (Skyscan 1172). 2D thin-slab minimum intensity projection (TS-MinIP) and 3D volume-rendered images were constructed. The same teeth were then processed by clearing and staining. For each root, the images obtained from the clearing, 2D, 3D, and combined 2D and 3D techniques were examined independently by four endodontists and categorized according to Vertucci's classification. Fine anatomical structures such as accessory canals, intercanal communications and loops were also identified. Agreement among the four techniques for Vertucci's classification was 45.2% (14/31). The most frequent configurations were Vertucci's type IV, followed by type II, although many roots had complex, non-classifiable configurations. Generally, complex canal systems were more clearly visible in MCT images than with standard clearing and staining. Fine anatomical structures such as intercanal communications, accessory canals and loops were mostly detected with a combination of 2D TS-MinIP and 3D volume-rendering MCT images. Canal configurations and fine anatomic structures were more clearly observed in the combined 2D and 3D MCT images than with the clearing technique. The frequency of non-classifiable configurations demonstrates the complexity of mandibular first molar mesial root canal anatomy.

  9. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    PubMed Central

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points where a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by more than a factor of three compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
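
    A minimal sketch of the plane-based sampling idea: instead of stepping at a fixed arc length along each ray, samples are taken where the ray intersects an equidistant cluster of parallel planes (here, the z-slices of the volume). The transfer function and nearest-neighbour lookup are illustrative simplifications, not the paper's method, and the ray is assumed not to be parallel to the planes.

```python
import numpy as np

def cast_ray(vol, origin, direction):
    """Front-to-back compositing with plane-based sample points."""
    direction = np.asarray(direction, float)   # direction[0] must be nonzero
    color, alpha = 0.0, 0.0
    for k in range(vol.shape[0]):              # plane z = k
        t = (k - origin[0]) / direction[0]     # ray/plane intersection
        if t < 0:
            continue
        p = origin + t * direction
        iy, ix = int(round(p[1])), int(round(p[2]))
        if not (0 <= iy < vol.shape[1] and 0 <= ix < vol.shape[2]):
            continue
        s = vol[k, iy, ix]                     # nearest-neighbour sample
        a = min(1.0, s)                        # toy opacity transfer function
        color += (1 - alpha) * a * s           # front-to-back compositing
        alpha += (1 - alpha) * a
        if alpha > 0.99:                       # early ray termination
            break
    return color

vol = np.zeros((8, 8, 8))
vol[4, 4, 4] = 1.0                             # single bright voxel
print(cast_ray(vol, np.array([0.0, 4.0, 4.0]), (1.0, 0.0, 0.0)))
```

    Because the sample positions coincide with the slice planes, each sample falls exactly on stored data along one axis, which is what removes much of the per-sample interpolation cost in the plane-based scheme.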

  10. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, G. Patrick; Browne, Jolyon

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  11. Plexus structure imaging with thin slab MR neurography: rotating frames, fly-throughs, and composite projections

    NASA Astrophysics Data System (ADS)

    Raphael, David T.; McIntee, Diane; Tsuruda, Jay S.; Colletti, Patrick; Tatevossian, Raymond; Frazier, James

    2006-03-01

    We explored multiple image processing approaches by which to display the segmented adult brachial plexus in a three-dimensional manner. Magnetic resonance neurography (MRN) 1.5-Tesla scans with STIR sequences, which preferentially highlight nerves, were performed in adult volunteers to generate high-resolution raw images. Using multiple software programs, the raw MRN images were then manipulated so as to achieve segmentation of plexus neurovascular structures, which were incorporated into three different visualization schemes: rotating upper thoracic girdle skeletal frames, dynamic fly-throughs parallel to the clavicle, and thin slab volume-rendered composite projections.

  12. Enhancement method for rendered images of home decoration based on SLIC superpixels

    NASA Astrophysics Data System (ADS)

    Dai, Yutong; Jiang, Xiaotong

    2018-04-01

    Rendering technology has been widely used in the home decoration industry in recent years to produce images of home decoration designs. However, because rendered images of home decoration designs depend heavily on the renderer parameters and the scene lighting, most rendered images in this industry require further optimization afterwards. To reduce this workload and enhance rendered images automatically, an algorithm utilizing neural networks is proposed in this manuscript. In addition, to handle a few extreme conditions such as strong sunlight and artificial lights, SLIC-superpixel-based segmentation is used to select the bright areas of an image and enhance them independently. Finally, these selected areas are merged with the entire image. Experimental results show that the proposed method effectively enhances the rendered images when compared with some existing algorithms. Moreover, the proposed strategy proves adaptable, especially to images with obviously bright parts.
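
    The segment-then-enhance-independently-then-merge pipeline can be sketched as follows. Real SLIC superpixels (available, for example, as `skimage.segmentation.slic` in scikit-image) adapt their shape to image content; to keep this sketch dependency-free, a regular block grid stands in for the superpixels, and the "enhancement" is a simple gain on over-bright regions. All names and thresholds are illustrative.

```python
import numpy as np

def enhance_bright_regions(img, block=8, thresh=0.8, gain=0.6):
    """Tone down over-exposed regions independently, then merge.

    Each block plays the role of a superpixel: if its mean brightness
    exceeds `thresh` (e.g. strong sunlight), it is scaled by `gain`
    on its own; all other regions pass through unchanged.
    """
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = out[y:y + block, x:x + block]   # view into `out`
            if region.mean() > thresh:               # over-exposed region
                region *= gain                       # enhance independently
    return np.clip(out, 0.0, 1.0)                    # merged result

img = np.full((16, 16), 0.3)
img[:8, :8] = 0.95                                   # strong sunlight patch
result = enhance_bright_regions(img)
print(result[0, 0], result[12, 12])
```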

  13. Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device

    NASA Astrophysics Data System (ADS)

    Färber, Matthias; Heller, Julika; Handels, Heinz

    2007-03-01

    A lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract liquor. Training of this procedure is usually done on the patient, guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the risk to the patient. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat, together with the original CT data, which contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation, the Visible Human male data set was used to generate a virtual training body. Several users with different levels of medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. In particular, the restriction of transversal needle movement together with the rotation constraints enabled by the 6DOF device facilitates a realistic puncture simulation.

  14. A spatially augmented reality sketching interface for architectural daylighting design.

    PubMed

    Sheng, Yu; Yapo, Theodore C; Young, Christopher; Cutler, Barbara

    2011-01-01

    We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a model in the real world, captured by a camera above the scene, are processed to construct a virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique, leveraging radiosity to simulate the interreflection between diffuse patches, and shadow volumes to generate per-pixel direct illumination. The rendered images are then projected onto the real model by four calibrated projectors to help users study the daylighting illumination. The virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Furthermore, participants may interactively redesign the geometry and materials of the space by manipulating physical design elements and see the updated lighting simulation. © 2011 IEEE. Published by the IEEE Computer Society.

  15. Denoising and 4D visualization of OCT images

    PubMed Central

    Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.

    2009-01-01

    We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data-set-specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings with respect to both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and of the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated the superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert-segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity) in OCT embryo images. With its semi-automatic, data-set-specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509

  16. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By exploiting this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of texture data required. The coherence also allows improved speed by rendering flat-shaded polygons instead of textured polygons where appropriate, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which identify coherent regions more accurately than the earlier scalar-based metrics. Experimental results from runs using different data sets and error metrics demonstrate that the new methods give substantial improvements in volume rendering performance.
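
    The temporal-coherence idea behind the TSP tree can be illustrated with a toy version of the test it performs: if a region's variation over a span of timesteps stays below an error tolerance, one cached texture can be reused for the whole span instead of uploading a texture per timestep. The greedy span-merging below and its plain value-range "error metric" are illustrative stand-ins for the paper's tree traversal and color-based metrics.

```python
def coherent_spans(values, tol):
    """Greedily merge consecutive timesteps whose value range stays <= tol.

    `values` is a per-timestep summary of one spatial region; each
    returned (start, end) span could share a single cached texture.
    """
    spans = []
    start = 0
    for i in range(1, len(values) + 1):
        window = values[start:i + 1] if i < len(values) else None
        if window is None or max(window) - min(window) > tol:
            spans.append((start, i - 1))    # reuse one texture for this span
            start = i
    return spans

# A slowly varying region needs few texture uploads:
print(coherent_spans([1.0, 1.1, 1.05, 2.0, 2.02, 5.0], tol=0.2))
```

    Larger tolerances trade image fidelity for fewer texture uploads, which is exactly the error-versus-performance dial the color-based metrics are meant to control more accurately.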

  17. Imaging method for monitoring delivery of high dose rate brachytherapy

    DOEpatents

    Weisenberger, Andrew G; Majewski, Stanislaw

    2012-10-23

    A method for in-situ monitoring of both the balloon/cavity and the radioactive source in brachytherapy treatment, using at least one pair of miniature gamma cameras to acquire separate images of: 1) the radioactive source as it is moved in the tumor volume during brachytherapy; and 2) a relatively low-intensity radiation source produced either by an injected radiopharmaceutical rendering cancerous tissue visible or by a radioactive solution filling a balloon surgically implanted into the cavity formed by the surgical resection of a tumor.

  18. Topographic analyses of shape of eyes with pathologic myopia by high-resolution three-dimensional magnetic resonance imaging.

    PubMed

    Moriyama, Muka; Ohno-Matsui, Kyoko; Hayashi, Kengo; Shimada, Noriaki; Yoshida, Takeshi; Tokoro, Takashi; Morita, Ikuo

    2011-08-01

    To analyze the topography of human eyes with pathologic myopia by high-resolution magnetic resonance imaging (MRI) with volume rendering of the acquired images. Observational case series. Eighty-six eyes of 44 patients with high myopia (refractive error ≥-8.00 diopters [D] or axial length >26.5 mm) were studied. Forty emmetropic eyes were examined as controls. The participants were examined with an MRI scanner (Signa HDxt 1.5T, GE Healthcare, Waukesha, WI), and T2-weighted cubes were obtained. Volume renderings of the images from high-resolution 3-dimensional (3D) data were performed on a computer workstation. The margins of the globes were then identified semiautomatically by signal intensity, and the tissues outside the globes were removed. The main outcomes were the 3D topographic characteristics of the globes and the distribution of 4 distinct globe shapes, classified according to the symmetry and radius of curvature of the contour of the posterior segment: the barrel, cylindric, nasally distorted, and temporally distorted types. In 69.8% of the patients with bilateral high myopia, both eyes had the same ocular shape. The most protruded part of the globe lay along the central sagittal axis in 78.3% of eyes and slightly inferior to the central axis in the remaining eyes. In 38 of 68 eyes (55.9%) with bilateral pathologic myopia, multiple protrusions were observed. The eyes with 2 protrusions were subdivided into those with nasal protrusions and those with temporal protrusions. The eyes with 3 protrusions were subdivided into nasal, temporal superior, and temporal inferior protrusions. The eyes with visual field defects that could not be explained by myopic fundus lesions significantly more frequently had a temporally distorted shape. Eyes with ≥2 protrusions had myopic chorioretinal atrophy significantly more frequently than eyes with ≤1 protrusion.
Our results demonstrate that it is possible to obtain a complete topographic image of human eyes by high-resolution MRI with volume-rendering techniques. The results showed that there are different ocular shapes in eyes with pathologic myopia, and that the difference in the ocular shape is correlated with the development of vision-threatening conditions in eyes with pathologic myopia. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  19. Improvement of the Correlative AFM and ToF-SIMS Approach Using an Empirical Sputter Model for 3D Chemical Characterization.

    PubMed

    Terlier, T; Lee, J; Lee, K; Lee, Y

    2018-02-06

    Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. 
Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.
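
The core idea of combining AFM height maps with sputter depth profiles can be illustrated with a minimal sketch: each ToF-SIMS layer is assigned per-pixel z positions by interpolating between the AFM topographies measured before and after sputtering. This is a deliberately simplified, linear stand-in for the paper's dynamic-model-based volume correction, and all names and values are illustrative.

```python
import numpy as np

def correct_depths(h_before, h_after, n_layers):
    """Assign each SIMS layer a per-pixel z position by linearly
    interpolating between the AFM height maps measured before and
    after sputtering (a linear stand-in for a full sputter model)."""
    fractions = np.linspace(0.0, 1.0, n_layers)  # layer 0 = original surface
    # z[k, i, j]: corrected depth of layer k at pixel (i, j)
    return np.array([h_before + f * (h_after - h_before) for f in fractions])

# toy example: a flat surface eroded into a shallow pit (units: nm)
h0 = np.zeros((4, 4))
h1 = np.full((4, 4), -10.0)
h1[1:3, 1:3] = -20.0          # the pit region sputters faster
z = correct_depths(h0, h1, n_layers=5)
```

With the corrected z positions, each chemical map can be placed at its true depth instead of on a flat plane, which is the essence of removing topography-induced distortion from the 3D render.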

  20. Feature tracking for automated volume of interest stabilization on 4D-OCT images

    NASA Astrophysics Data System (ADS)

    Laves, Max-Heinrich; Schoob, Andreas; Kahrs, Lüder A.; Pfeiffer, Tom; Huber, Robert; Ortmaier, Tobias

    2017-03-01

    A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object by continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes to maximum intensity projections (MIP) of 4D-OCT images. The estimated VOI location is used to conveniently show corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented tracking scheme for soft tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables its use for sub-epithelial tracking of microvessels for image guidance.
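
A thin-slab MIP of the kind used here to improve the projections can be sketched in a few lines of NumPy; the slab bounds, which in the paper would come from the tracked VOI position, are illustrative.

```python
import numpy as np

def thin_slab_mip(volume, z_center, half_thickness):
    """Maximum intensity projection restricted to a thin slab of
    slices around the tracked volume of interest, suppressing
    bright clutter outside the slab."""
    z0 = max(z_center - half_thickness, 0)
    z1 = min(z_center + half_thickness + 1, volume.shape[0])
    return volume[z0:z1].max(axis=0)

vol = np.zeros((10, 4, 4))
vol[2, 1, 1] = 5.0   # structure of interest inside the slab
vol[8, 0, 0] = 9.0   # bright clutter far outside the slab
mip = thin_slab_mip(vol, z_center=3, half_thickness=2)
```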

  1. A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-05-01

    We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable, rendering a 204 GB data cube at an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
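
Volume bricking, the decomposition step that lets a large cube be distributed across GPUs, can be sketched as follows. This toy version omits the overlap voxels at brick boundaries and the GPU upload that a real distributed renderer needs.

```python
import numpy as np

def make_bricks(volume, brick_shape):
    """Split a data cube into axis-aligned bricks, returning each
    brick with its origin index (no boundary overlap in this sketch)."""
    bricks = []
    bz, by, bx = brick_shape
    nz, ny, nx = volume.shape
    for z in range(0, nz, bz):
        for y in range(0, ny, by):
            for x in range(0, nx, bx):
                bricks.append(((z, y, x),
                               volume[z:z + bz, y:y + by, x:x + bx]))
    return bricks

cube = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
bricks = make_bricks(cube, (2, 2, 2))   # 8 bricks of 2x2x2 voxels
```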

  2. Large area 3-D optical coherence tomography imaging of lumpectomy specimens for radiation treatment planning

    NASA Astrophysics Data System (ADS)

    Wang, Cuihuan; Kim, Leonard; Barnard, Nicola; Khan, Atif; Pierce, Mark C.

    2016-02-01

    Our long term goal is to develop a high-resolution imaging method for comprehensive assessment of tissue removed during lumpectomy procedures. By identifying regions of high-grade disease within the excised specimen, we aim to develop patient-specific post-operative radiation treatment regimens. We have assembled a benchtop spectral-domain optical coherence tomography (SD-OCT) system with 1320 nm center wavelength. Automated beam scanning enables "sub-volumes" spanning 5 mm x 5 mm x 2 mm (500 A-lines x 500 B-scans x 2 mm in depth) to be collected in under 15 seconds. A motorized sample positioning stage enables multiple sub-volumes to be acquired across an entire tissue specimen. Sub-volumes are rendered from individual B-scans in 3D Slicer software and en face (XY) images are extracted at specific depths. These images are then tiled together using MosaicJ software to produce a large area en face view (up to 40 mm x 25 mm). After OCT imaging, specimens were sectioned and stained with H&E, allowing comparison between OCT image features and disease markers on histopathology. This manuscript describes the technical aspects of image acquisition and reconstruction, and reports initial qualitative comparison between large area en face OCT images and H&E-stained tissue sections. Future goals include developing image reconstruction algorithms for mapping an entire sample, and registering OCT image volumes with clinical CT and MRI images for post-operative treatment planning.
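
The en face extraction and tiling steps can be sketched as below, assuming sub-volumes stored as (depth, B-scan, A-line) arrays; the seam blending and registration that MosaicJ performs are omitted, and the grid layout is illustrative.

```python
import numpy as np

def en_face(volume, depth_index):
    """Extract an en face (XY) image at a fixed depth from an OCT
    sub-volume stored as (depth, B-scan, A-line)."""
    return volume[depth_index]

def tile_grid(tiles, rows, cols):
    """Tile equally sized en face images into one large-area view
    (no blending or registration at the seams in this sketch)."""
    return np.block([[tiles[r * cols + c] for c in range(cols)]
                     for r in range(rows)])

# four toy 8-deep sub-volumes, each filled with its own index value
subvols = [np.full((8, 5, 5), i, dtype=float) for i in range(4)]
mosaic = tile_grid([en_face(v, 3) for v in subvols], rows=2, cols=2)
```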

  3. Increasing the speed of medical image processing in MatLab®

    PubMed Central

    Bister, M; Yap, CS; Ng, KH; Tok, CH

    2007-01-01

    MatLab® has often been considered an excellent environment for fast algorithm development but is generally perceived as slow and hence not fit for routine medical image processing, where large data sets are now available e.g., high-resolution CT image sets with typically hundreds of 512x512 slices. Yet, with proper programming practices – vectorization, pre-allocation and specialization – applications in MatLab® can run as fast as in C language. In this article, this point is illustrated with fast implementations of bilinear interpolation, watershed segmentation and volume rendering. PMID:21614269
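
The same programming practices carry over to other array languages. A minimal NumPy analogue of the loop-versus-vectorized contrast the paper discusses (the thresholding operation itself is just an illustrative stand-in for heavier image processing):

```python
import numpy as np

def threshold_loop(image, level):
    """Naive per-pixel loop: the slow pattern the paper warns about."""
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = 1.0 if image[i, j] > level else 0.0
    return out

def threshold_vectorized(image, level):
    """Vectorized equivalent: one array expression, no interpreted loop."""
    return (image > level).astype(image.dtype)

img = np.arange(16, dtype=float).reshape(4, 4)
```

Both functions return identical results; on a 512x512 slice stack the vectorized form avoids hundreds of thousands of interpreted iterations, which is the same effect vectorization has in MATLAB.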

  4. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprint, volume scientific data (using volume rendering, isosurface and slice plane), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. Once all these components are included, EarthScape will be a multi-purpose platform that will provide at the same time data analysis, hybrid visualization and complex interactions. The software is available on demand for free at france@exelisvis.com.

  5. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  6. Virtual probing system for medical volume data

    NASA Astrophysics Data System (ADS)

    Xiao, Yongfei; Fu, Yili; Wang, Shuguo

    2007-12-01

    Because 3D medical data visualization is computationally demanding, interactively exploring the interior of a dataset has long been a problem to be resolved. In this paper, we present a novel approach to explore 3D medical datasets in real time by utilizing a 3D widget to manipulate the scanning plane. With the help of the 3D texture capability of modern graphics cards, a virtual scanning probe is used to explore oblique clipping planes of medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. The system promises to be a valuable tool for anatomy education and for the interpretation of medical images in research.
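
Sampling an oblique clipping plane from a volume, which the paper implements with 3D texture hardware, can be sketched on the CPU with trilinear interpolation; the plane parameters below are illustrative, and an axis-aligned plane is used only so the result is easy to check.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_oblique_plane(volume, origin, u, v, size):
    """Sample an arbitrarily oriented plane from a volume: build a
    grid of 3D sample points from an origin and two in-plane axis
    vectors, then interpolate trilinearly (the CPU analogue of
    3D-texture hardware interpolation)."""
    s, t = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    pts = (origin[:, None, None]
           + u[:, None, None] * s
           + v[:, None, None] * t)                 # shape (3, size, size)
    return map_coordinates(volume, pts, order=1)   # trilinear sampling

vol = np.arange(8 * 8 * 8, dtype=float).reshape(8, 8, 8)
# axis-aligned test plane z = 2; an oblique u, v works the same way
plane = sample_oblique_plane(vol,
                             origin=np.array([2.0, 0.0, 0.0]),
                             u=np.array([0.0, 1.0, 0.0]),
                             v=np.array([0.0, 0.0, 1.0]),
                             size=4)
```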

  7. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    PubMed Central

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains for a cohort of twenty-five clinically acquired fetal MRI scans were performed. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparison with manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  8. UWGSP4: an imaging and graphics superworkstation and its medical applications

    NASA Astrophysics Data System (ADS)

    Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin

    1992-05-01

    UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. The real-time cine display, up to 32 frames of 1280 X 1024 pixels per second, is supported. In 3-D imaging, the update rate for the surface rendering is 10 frames of 20,000 polygons per second; the update rate for the volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024- pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.

  9. Enhancing radiological volumes with symbolic anatomy using image fusion and collaborative virtual reality.

    PubMed

    Silverstein, Jonathan C; Dech, Fred; Kouchoukos, Philip L

    2004-01-01

    Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusion would expand surgeons' capabilities in interpreting data and planning treatment. Such an application has not been seen clinically. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion. We describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich stereo collaborative volume visualization environment for use by surgeons in which DICOM datasets will self-report underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing Tele-immersive DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, VISIBLE HUMAN, volumetric fusion and C++ on a Tele-immersive platform. This application will allow us to identify structures and display polygonal representations from atlas data overlaid with the volume rendering. First, atlas data is automatically translated, rotated, and scaled to the patient data during loading using a public domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through the use of collision detection or intersection testing of various transparent polygonal representations, the polygonal structures are highlighted into the volumetric representation while the SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data. 
This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.

  10. A feasibility study of hand kinematics for EVA analysis using magnetic resonance imaging

    NASA Technical Reports Server (NTRS)

    Dickenson, Rueben D.; Lorenz, Christine H.; Peterson, Steven W.; Strauss, Alvin M.; Main, John A.

    1992-01-01

    A new method of analyzing the kinematics of joint motion is developed. Magnetic Resonance Imaging (MRI) offers several distinct advantages. Past methods of studying anatomic joint motion have usually centered on four approaches. These methods are x-ray projection, goniometric linkage analysis, sonic digitization, and landmark measurement of photogrammetry. Of these four, only x-ray is applicable for in vivo studies. The remaining three methods utilize other types of projections of inter-joint measurements, which can cause various types of error. MRI offers accuracy in measurement due to its tomographic nature (as opposed to projection) without the problems associated with x-ray dosage. Once the data acquisition of MR images was complete, the images were processed using a 3D volume rendering workstation. The metacarpophalangeal (MCP) joint of the left index finger was selected and reconstructed into a three-dimensional graphic display. From the reconstructed volumetric images, measurements of the angles of movement of the applicable bones were obtained and processed by analyzing the screw motion of the MCP joint. Landmark positions were chosen at distinctive locations of the joint at fixed image threshold intensity levels to ensure repeatability. The primarily two-dimensional planar motion of this joint was then studied using a method of constructing coordinate systems from three (or more) points. A transformation matrix based on a world coordinate system described the location and orientation of a local target coordinate system. Future research involving volume rendering of MRI data focusing on the internal kinematics of the hand's individual ligaments, cartilage, tendons, etc. will follow. Its findings will show the applicability of MRI to joint kinematics for gaining further knowledge of the hand-glove (power-assisted) design for extravehicular activity (EVA).
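
Constructing a local coordinate system from three landmark points, as described above, can be sketched as follows; the landmark values are illustrative, and the axis convention (x along the first edge, z normal to the landmark plane) is one common choice, not necessarily the authors'.

```python
import numpy as np

def frame_from_landmarks(p0, p1, p2):
    """Build a 4x4 homogeneous transform for a local coordinate
    system from three non-collinear landmarks: x along p0->p1,
    z normal to the landmark plane, y completing a right-handed set,
    origin at p0."""
    x = p1 - p0
    x = x / np.linalg.norm(x)
    z = np.cross(x, p2 - p0)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T

# three toy landmarks lying in the xy-plane
T = frame_from_landmarks(np.array([1.0, 0.0, 0.0]),
                         np.array([2.0, 0.0, 0.0]),
                         np.array([1.0, 1.0, 0.0]))
```

Composing such a transform for the moving bone with the inverse of the fixed bone's transform yields the relative motion from which screw-axis parameters can be extracted.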

  11. Method to optimize patch size based on spatial frequency response in image rendering of the light field

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui

    2018-05-01

    A focused plenoptic camera can effectively transform angular and spatial information to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. By using a spatial frequency response measurement, a method to obtain a suitable patch size is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that the results of the image rendered based on frequency response measurement are in accordance with the theoretical calculation, which indicates that this is an effective way to determine the patch size. This study may provide support to light-field image rendering.

  12. View planetary differentiation process through high-resolution 3D imaging

    NASA Astrophysics Data System (ADS)

    Fei, Y.

    2011-12-01

    Core-mantle separation is one of the most important processes in planetary evolution, defining the structure and chemical distribution in the planets. Iron-dominated core materials could migrate through silicate mantle to the core by efficient liquid-liquid separation and/or by percolation of liquid metal through solid silicate matrix. We can experimentally simulate these processes to examine the efficiency and time of core formation and its geochemical signatures. The quantitative measure of the efficiency of percolation is usually the dihedral angle, related to the interfacial energies of the liquid and solid phases. To determine the true dihedral angle at high pressure and temperatures, it is necessary to measure the relative frequency distributions of apparent dihedral angles between the quenched liquid metal and silicate grains for each experiment. Here I present a new imaging technique to visualize the distribution of liquid metal in silicate matrix in 3D by combining focused ion beam (FIB) milling and high-resolution SEM imaging. The 3D volume rendering provides precise determination of the dihedral angle and quantitative measure of volume fraction and connectivity. I have conducted a series of experiments using mixtures of San Carlos olivine and Fe-S (10 wt% S) metal with different metal-silicate ratios, up to 25 GPa and at temperatures above 1800 °C. High-quality 3D volume renderings were reconstructed from FIB serial sectioning and imaging with 10-nm slice thickness and 14-nm image resolution for each quenched sample. The unprecedented spatial resolution at the nanoscale allows detailed examination of textural features and precise determination of the dihedral angle as a function of pressure, temperature and composition. The 3D reconstruction also allows direct assessment of connectivity in multi-phase matrix, providing a new way to investigate the efficiency of metal percolation in a real silicate mantle.
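
The link between interfacial energies and the dihedral angle can be illustrated with the standard relation cos(θ/2) = γss/(2γsl), where γss is the solid-solid and γsl the solid-liquid interfacial energy; the energy values below are hypothetical, chosen only to make the arithmetic transparent.

```python
import math

def dihedral_angle_deg(gamma_ss, gamma_sl):
    """True dihedral angle from interfacial energies via
    cos(theta/2) = gamma_ss / (2 * gamma_sl). An interconnected
    melt network is expected for angles below 60 degrees."""
    ratio = gamma_ss / (2.0 * gamma_sl)
    return 2.0 * math.degrees(math.acos(ratio))

# hypothetical energies chosen so the ratio equals cos(30 deg),
# giving the 60-degree connectivity threshold exactly
theta = dihedral_angle_deg(gamma_ss=math.sqrt(3.0), gamma_sl=1.0)
```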

  13. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization.

    PubMed

    Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient real-time communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.
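
The state-synchronization idea, sending only a small JSON representation of the renderers' state rather than any image data, can be sketched as follows; the field names are illustrative, not the actual XTK state schema.

```python
import json

# A minimal stand-in for the small JSON state object described in the
# paper: only shared renderer state travels between clients, never the
# image data itself, because every client already holds the full volume.
local_state = {
    "volume_id": "sub-01_T1w",
    "camera": {"position": [0.0, 0.0, 250.0], "up": [0.0, 1.0, 0.0]},
    "window_level": {"window": 400, "level": 40},
    "orientation": "axial",
}

message = json.dumps(local_state)      # serialized and sent via the sync API
remote_state = json.loads(message)     # deserialized and applied remotely
```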

  14. Alpha shape theory for 3D visualization and volumetric measurement of brain tumor progression using magnetic resonance images.

    PubMed

    Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi

    2015-07-01

    Resection of brain tumors is a delicate surgical task due to its direct influence on patient survival. Determining the extent of tumor resection requires accurate estimation and comparison of tumor volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI). The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to determine the repeatability and reproducibility of the volume measurements. Accuracy of the method is validated by comparing the volume estimated with the proposed method against the gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries, and the volume description and visualization enable an interactive examination of tumor tissue and its surroundings. The results demonstrate that, in comparison with other standard methods, alpha shape theory is superior for precise volumetric measurement of tumors. Copyright © 2015 Elsevier Inc. All rights reserved.
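
A basic alpha-shape volume estimate can be sketched with a Delaunay triangulation whose tetrahedra are filtered by circumradius; this is a generic illustration of alpha shape theory under that standard definition, not the authors' implementation, and it ignores interior cavities.

```python
import numpy as np
from scipy.spatial import Delaunay

def tet_circumradius(p):
    """Circumradius of a tetrahedron given as a (4, 3) array; the
    circumcenter c solves 2*(p_i - p_0) . c = |p_i|^2 - |p_0|^2."""
    A = 2.0 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    c = np.linalg.solve(A, b)
    return np.linalg.norm(c - p[0])

def alpha_shape_volume(points, alpha):
    """Sum the volumes of Delaunay tetrahedra whose circumradius is
    below alpha: a basic alpha-shape volume estimate."""
    tri = Delaunay(points)
    volume = 0.0
    for simplex in tri.simplices:
        p = points[simplex]
        det = np.linalg.det(p[1:] - p[0])
        if abs(det) < 1e-12:          # skip degenerate tetrahedra
            continue
        if tet_circumradius(p) < alpha:
            volume += abs(det) / 6.0
    return volume

# unit cube corners: every tetrahedron of corners shares the cube's
# circumsphere (radius sqrt(3)/2), so alpha = 1 keeps them all
cube = np.array([[x, y, z] for x in (0.0, 1.0)
                 for y in (0.0, 1.0) for z in (0.0, 1.0)])
vol = alpha_shape_volume(cube, alpha=1.0)
```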

  15. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
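
The consistency argument, applying the transfer function to a pdf of voxel values rather than to pre-averaged values, can be illustrated with a toy example: averaging first can push intensities into a transfer-function band that no original voxel occupied, while the pdf-based expectation cannot.

```python
import numpy as np

def tf(v):
    """Toy transfer function: highlight only a narrow intensity band."""
    return np.where((v >= 90) & (v <= 110), 1.0, 0.0)

block = np.array([80.0, 120.0])          # a 2-voxel neighborhood

# full resolution: apply the TF per voxel, then average the result
full_res = tf(block).mean()              # band is empty -> 0.0

# naive low resolution: average the voxels first, then apply the TF
naive = tf(block.mean())                 # tf(100) -> spurious highlight

# pdf-based low resolution: keep the value distribution, apply the TF
# to each pdf bin, and take the expectation
values, counts = np.unique(block, return_counts=True)
pdf_based = (tf(values) * counts / counts.sum()).sum()   # matches full_res
```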

  16. Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes.

    PubMed

    Murray, Trevor; Zeil, Jochen

    2017-01-01

    Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations, because image differences increase smoothly with distance from a reference location, and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area') has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large, extending for metres, depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above-horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.
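
The way image differences define compass information can be sketched by computing an RMS difference between a reference snapshot and every horizontal rotation of the current view; the minimum of this rotational difference function indicates the reference orientation. The images below are toy random panoramas standing in for views rendered from the 3D models.

```python
import numpy as np

def rotational_idf(reference, current):
    """RMS image difference between the current panoramic view and a
    reference snapshot for every horizontal rotation (column shift)."""
    diffs = []
    for shift in range(reference.shape[1]):
        rotated = np.roll(current, shift, axis=1)
        diffs.append(np.sqrt(np.mean((rotated - reference) ** 2)))
    return np.array(diffs)

rng = np.random.default_rng(0)
panorama = rng.random((20, 90))              # toy 20 x 90 panoramic image
misrotated = np.roll(panorama, 25, axis=1)   # viewer turned by 25 columns
idf = rotational_idf(panorama, misrotated)
best = int(np.argmin(idf))                   # shift that undoes the rotation
```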

  17. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion.

    PubMed

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan

    2013-09-01

    Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precise measurement of pleural effusion volume remains challenging, and there is currently no widely recognized, accurate method. The aims of this study were to explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume, and then to analyze the correlation between the volume of a free pleural effusion and its different diameters. The 64-slice CT volume-rendering technique was used in three measurement studies. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameters of the effusion, which was then used to calculate the regression equations. After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume; no significant differences were found (P = 0.836). For the 25 patients with drained pleural effusions, the comparison of the reduction in volume with the actual volume of the liquid extract revealed no significant differences (P = 0.989). The following linear regression equation relates the pleural effusion volume (V), measured by the CT volume-rendering technique, to the greatest depth of the effusion (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression relates the volume to the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). 
The 64-slice CT volume-rendering technique can accurately measure the volume in pleural effusion patients, and a linear regression equation can be used to estimate the volume of the free pleural effusion.
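
The two regression equations reported above can be applied directly. The abstract does not state the units of d, l, h, or V, so the centimetre/millilitre interpretation in the comments is an assumption for illustration only.

```python
def effusion_volume_from_depth(d):
    """Study regression: V = 158.16 * d - 116.01 (r = 0.91), with d
    the greatest depth of the effusion (assumed cm; V assumed ml)."""
    return 158.16 * d - 116.01

def effusion_volume_from_diameters(l, h, d):
    """Study regression: V = 0.56 * (l * h * d) + 39.44 (r = 0.92),
    using the product of the three effusion diameters."""
    return 0.56 * (l * h * d) + 39.44

v1 = effusion_volume_from_depth(5.0)
v2 = effusion_volume_from_diameters(10.0, 8.0, 5.0)
```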

  18. Semi-automated delineation of breast cancer tumors and subsequent materialization using three-dimensional printing (rapid prototyping).

    PubMed

    Schulz-Wendtland, Rüdiger; Harz, Markus; Meier-Meitinger, Martina; Brehm, Barbara; Wacker, Till; Hahn, Horst K; Wagner, Florian; Wittenberg, Thomas; Beckmann, Matthias W; Uder, Michael; Fasching, Peter A; Emons, Julius

    2017-03-01

    Three-dimensional (3D) printing has become widely available, and a few cases of its use in clinical practice have been described. The aim of this study was to explore facilities for the semi-automated delineation of breast cancer tumors and to assess the feasibility of 3D printing of breast cancer tumors. In a case series of five patients, different 3D imaging methods (magnetic resonance imaging (MRI), digital breast tomosynthesis (DBT), and 3D ultrasound) were used to capture 3D data for breast cancer tumors. The volumes of the breast tumors were calculated to assess the comparability of the breast tumor models, and the MRI information was used to render models on a commercially available 3D printer to materialize the tumors. The tumor volumes calculated from the different 3D methods appeared to be comparable. Tumor models with volumes between 325 mm3 and 7,770 mm3 were printed and compared with the models rendered from MRI. The materialization of the tumors reflected the computer models of them. 3D printing (rapid prototyping) appears to be feasible. Scenarios for the clinical use of the technology might include presenting the model to the surgeon to provide a better understanding of the tumor's spatial characteristics in the breast, in order to improve decision-making in relation to neoadjuvant chemotherapy or surgical approaches. J. Surg. Oncol. 2017;115:238-242. © 2016 Wiley Periodicals, Inc.

  19. Realistic tissue visualization using photoacoustic image

    NASA Astrophysics Data System (ADS)

    Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong

    2018-02-01

    Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing highly intuitive information, and this advantage can be greatly enhanced by choosing an appropriate visualization method. The task is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information, but the data themselves cannot be displayed directly: because images are always shown in 2D space, visualization is the key step that creates the real value of volume data. However, visualizing 3D data requires complicated algorithms and a high computational burden, so specialized algorithms and computational optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is close to the real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining realistic texture from the photoacoustic data: rays reflected at the surface were visualized in white, and the color returned from deep tissue was visualized in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
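    The depth-dependent coloring described above can be sketched in a few lines. The following is a hypothetical minimal version, not the authors' CUDA/OpenGL ray caster: instead of full front-to-back compositing, it tints a maximum-intensity projection by the depth at which the maximum occurs, white near the surface and red at depth; the `shallow` and `deep` colors are illustrative choices.

```python
import numpy as np

def depth_tinted_mip(volume, shallow=(1.0, 1.0, 1.0), deep=(0.8, 0.1, 0.1)):
    """Render an RGB image from a photoacoustic volume (z, y, x): take the
    maximum-intensity voxel along each ray (axis 0) and tint it by the depth
    at which that maximum occurs -- white near the surface, red deep down."""
    depth_idx = volume.argmax(axis=0)            # depth of the strongest signal per ray
    intensity = volume.max(axis=0)               # its amplitude
    t = depth_idx / max(volume.shape[0] - 1, 1)  # normalized depth in [0, 1]
    shallow, deep = np.asarray(shallow), np.asarray(deep)
    color = (1.0 - t)[..., None] * shallow + t[..., None] * deep
    return intensity[..., None] * color          # (y, x, 3) RGB image

# toy volume: one bright voxel near the surface and one at depth
vol = np.zeros((10, 2, 2))
vol[1, 0, 0] = 1.0     # shallow -> rendered near-white
vol[8, 1, 1] = 1.0     # deep    -> rendered reddish
img = depth_tinted_mip(vol)
```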

  20. Smooth 2D manifold extraction from 3D image stack

    PubMed Central

    Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste

    2017-01-01

    Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, inadvertently creating important artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
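    The contrast between maximum intensity projection and a smooth extraction can be illustrated with a toy sketch. This is not the published smooth manifold extraction algorithm (which is considerably more sophisticated); it merely box-smooths the per-pixel argmax index map so that neighbouring pixels sample nearby slices, which is enough to show how a continuous index map suppresses single-pixel jumps:

```python
import numpy as np

def mip_indices(stack):
    """Per-pixel index of the brightest slice -- the (possibly discontinuous)
    layer that maximum intensity projection implicitly extracts."""
    return stack.argmax(axis=0)

def smooth_extract(stack, k=3):
    """Crude stand-in for smooth manifold extraction: box-smooth the argmax
    index map so it varies continuously, then sample voxels along that
    smoothed surface."""
    idx = mip_indices(stack).astype(float)
    pad = k // 2
    padded = np.pad(idx, pad, mode="edge")
    smooth = np.empty_like(idx)
    h, w = idx.shape
    for i in range(h):
        for j in range(w):
            smooth[i, j] = padded[i:i + k, j:j + k].mean()
    zi = np.clip(np.rint(smooth).astype(int), 0, stack.shape[0] - 1)
    rows, cols = np.indices(idx.shape)
    return stack[zi, rows, cols], zi

# stack with one noisy outlier pixel whose maximum sits far from its neighbours
stack = np.zeros((5, 4, 4))
stack[2] = 1.0            # true focal plane at z = 2
stack[4, 1, 1] = 2.0      # outlier: raw argmax jumps to z = 4 at (1, 1)
extracted, zi = smooth_extract(stack)
```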

  1. Prospective feasibility trial of radiotherapy target definition for head and neck cancer using 3-dimensional PET and CT imaging.

    PubMed

    Scarfone, Christopher; Lavely, William C; Cmelak, Anthony J; Delbeke, Dominique; Martin, William H; Billheimer, Dean; Hallahan, Dennis E

    2004-04-01

    The aim of this investigation was to evaluate the influence and accuracy of (18)F-FDG PET in target volume definition as a complementary modality to CT for patients with head and neck cancer (HNC) using dedicated PET and CT scanners. Six HNC patients were custom fitted with head and neck and upper body immobilization devices, and conventional radiotherapy CT simulation was performed together with (18)F-FDG PET imaging. Gross target volume (GTV) and pathologic nodal volumes were first defined in the conventional manner based on CT. A segmentation and surface-rendering registration technique was then used to coregister the (18)F-FDG PET and CT planning image datasets. (18)F-FDG PET GTVs were determined and displayed simultaneously with the CT contours. CT GTVs were then modified based on the PET data to form final PET/CT treatment volumes. Five-field intensity-modulated radiation therapy (IMRT) was then used to demonstrate dose targeting to the CT GTV or the PET/CT GTV. One patient was PET-negative after induction chemotherapy. The CT GTV was modified in all remaining patients based on (18)F-FDG PET data. The resulting PET/CT GTV was larger than the original CT volume by an average of 15%. In 5 cases, (18)F-FDG PET identified active lymph nodes that corresponded to lymph nodes contoured on CT. The pathologically enlarged CT lymph nodes were modified to create final lymph node volumes in 3 of 5 cases. In 1 of 6 patients, (18)F-FDG-avid lymph nodes were not identified as pathologic on CT. In 2 of 6 patients, registration of the independently acquired PET and CT data using segmentation and surface rendering resulted in a suboptimal alignment and, therefore, had to be repeated. Radiotherapy planning using IMRT demonstrated the capability of this technique to target anatomic or anatomic/physiologic target volumes. In this manner, metabolically active sites can be targeted with greater daily doses.
Inclusion of (18)F-FDG PET data resulted in modified target volumes in radiotherapy planning for HNC. PET and CT data acquired on separate, dedicated scanners may be coregistered for therapy planning; however, dual-acquisition PET/CT systems may be considered to reduce the need for reregistrations. It is possible to use IMRT to target dose to metabolically active sites based on coregistered PET/CT data.
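    The kind of volume bookkeeping reported above (a PET/CT GTV larger than the CT GTV by an average of 15%) can be illustrated with binary masks. This is purely a hypothetical sketch; in the study the contours were modified manually by clinicians, not by a simple union:

```python
import numpy as np

def merge_gtv(ct_gtv, pet_gtv):
    """Combine a CT-defined gross target volume (GTV) mask with a PET-avid
    mask and report the relative enlargement of the combined volume."""
    combined = ct_gtv | pet_gtv                    # voxels flagged by either modality
    growth = (combined.sum() - ct_gtv.sum()) / ct_gtv.sum()
    return combined, growth

ct = np.zeros((20, 20, 20), bool); ct[5:15, 5:15, 5:15] = True     # 1000 voxels
pet = np.zeros_like(ct);           pet[12:17, 5:15, 5:15] = True   # overlaps CT
combined, growth = merge_gtv(ct, pet)              # growth = 0.2, i.e. +20%
```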

  2. Three-dimensional structure of the curved mixing layer using image reconstruction and volume rendering

    NASA Astrophysics Data System (ADS)

    Karasso, P. S.; Mungal, M. G.

    1991-05-01

    This study investigates the structure and mixing of the two-dimensional turbulent mixing layer when subjected to longitudinal streamwise curvature. The straight layer is now well known to be dominated by the primary Kelvin-Helmholtz (KH) instability as well as the secondary Taylor-Goertler (TG) instability. For equal density fluids, placing the high-speed fluid on the inside of a streamwise bend causes the TG instability to be enhanced (unstable case), while placing the low-speed fluid on the inside of the same bend leads to the suppression of the TG instability (stable case). The location of the mixing transition is correspondingly altered. Our goal is to study the changes to the mixing field and growth rate resulting from the competition between instabilities. Our studies are performed in a newly constructed blow-down water facility capable of high Reynolds numbers and excellent optical access. Maximum flow speeds are 2 and 0.25 m/sec for the high- and low-speed sides, respectively, leading to maximum Reynolds numbers of 80 000 based on velocity difference and the width of the layer. We are able to dye one stream with a fluorescent dye, thus providing several planar views of the flow under laser sheet illumination. These views are superior to conventional approaches as they are free of wall effects and are not spatially integrating. However, our most useful diagnostic of the structure of the flow is the ability to record high-speed images of the end view of the flow that are then reconstructed by computer using the volume rendering technique of Jiménez et al. This approach is especially useful as it allows us to compare the structural changes to the flow resulting from the competition between the KH and TG instabilities. Another advantage is the fact that several hundred frames, covering many characteristic times, are incorporated into the rendered image and thus capture considerably more flow physics than do still images.
    We currently have our rendering techniques fully operational, and are presently acquiring high quality high-speed movies of the various flow cases. Our findings to date, based on planar time-averaged and instantaneous views, show the following: (1) a 50% increase in growth rate from the stable to the unstable case resulting from mild curvature; (2) an enhancement of the TG vortices in the unstable case, but without major disruption of the KH instability, which remains relatively intact; and (3) the occurrence of the KH instability at angles tilted with respect to the splitter plate tip, in agreement with the predictions of linear stability theory. This final observation has not been reported to date, primarily because sheet techniques have not been used at Reynolds numbers as high as the present study. The presentation will provide detailed views of the changes between the stable, straight, and unstable cases using our volume rendering approach, and will provide statistical measures, such as changes to vortex spacing and size, to quantify such changes.

  3. High-resolution Episcopic Microscopy (HREM) - Simple and Robust Protocols for Processing and Visualizing Organic Materials

    PubMed Central

    Geyer, Stefan H.; Maurer-Gesek, Barbara; Reissig, Lukas F.; Weninger, Wolfgang J.

    2017-01-01

    We provide simple protocols for generating digital volume data with the high-resolution episcopic microscopy (HREM) method. HREM is capable of imaging organic materials with volumes up to 5 x 5 x 7 mm³ at typical numeric resolutions between 1 x 1 x 1 and 5 x 5 x 5 µm³. Specimens are embedded in methacrylate resin and sectioned on a microtome. After each section, an image of the block surface is captured with a digital video camera that sits on the phototube connected to the compound microscope head. The optical axis passes through a green fluorescent protein (GFP) filter cube and is aligned with the position at which the block holder arm comes to rest after each section. In this way, a series of inherently aligned digital images, displaying subsequent block surfaces, is produced. Loading such an image series in three-dimensional (3D) visualization software facilitates the immediate conversion to digital volume data, which permit virtual sectioning in various orthogonal and oblique planes and the creation of volume- and surface-rendered computer models. We present three simple, tissue-specific protocols for processing various groups of organic specimens, including mouse, chick, quail, frog and zebrafish embryos, human biopsy material, uncoated paper and skin replacement material. PMID:28715372

  4. High-resolution Episcopic Microscopy (HREM) - Simple and Robust Protocols for Processing and Visualizing Organic Materials.

    PubMed

    Geyer, Stefan H; Maurer-Gesek, Barbara; Reissig, Lukas F; Weninger, Wolfgang J

    2017-07-07

    We provide simple protocols for generating digital volume data with the high-resolution episcopic microscopy (HREM) method. HREM is capable of imaging organic materials with volumes up to 5 x 5 x 7 mm³ at typical numeric resolutions between 1 x 1 x 1 and 5 x 5 x 5 µm³. Specimens are embedded in methacrylate resin and sectioned on a microtome. After each section, an image of the block surface is captured with a digital video camera that sits on the phototube connected to the compound microscope head. The optical axis passes through a green fluorescent protein (GFP) filter cube and is aligned with the position at which the block holder arm comes to rest after each section. In this way, a series of inherently aligned digital images, displaying subsequent block surfaces, is produced. Loading such an image series in three-dimensional (3D) visualization software facilitates the immediate conversion to digital volume data, which permit virtual sectioning in various orthogonal and oblique planes and the creation of volume- and surface-rendered computer models. We present three simple, tissue-specific protocols for processing various groups of organic specimens, including mouse, chick, quail, frog and zebrafish embryos, human biopsy material, uncoated paper and skin replacement material.

  5. Clinical Application of an Open-Source 3D Volume Rendering Software to Neurosurgical Approaches.

    PubMed

    Fernandes de Oliveira Santos, Bruno; Silva da Costa, Marcos Devanir; Centeno, Ricardo Silva; Cavalheiro, Sergio; Antônio de Paiva Neto, Manoel; Lawton, Michael T; Chaddad-Neto, Feres

    2018-02-01

    Preoperative recognition of the anatomic individualities of each patient can help to achieve more precise and less invasive approaches. It also may help to anticipate potential complications and intraoperative difficulties. Here we describe the use, accuracy, and precision of a free tool for planning microsurgical approaches using 3-dimensional (3D) reconstructions from magnetic resonance imaging (MRI). We used the 3D volume rendering tool of a free open-source software program for 3D reconstruction of images of surgical sites obtained by MRI volumetric acquisition. We recorded anatomic reference points, such as the sulcus and gyrus, and vascularization patterns for intraoperative localization of lesions. Lesion locations were confirmed during surgery by intraoperative ultrasound and/or electrocorticography and later by postoperative MRI. Between August 2015 and September 2016, a total of 23 surgeries were performed using this technique for 9 low-grade gliomas, 7 high-grade gliomas, 4 cortical dysplasias, and 3 arteriovenous malformations. The technique helped delineate lesions with an overall accuracy of 2.6 ± 1.0 mm. 3D reconstructions were successfully performed in all patients, and images showed sulcus, gyrus, and venous patterns corresponding to the intraoperative images. All lesion areas were confirmed both intraoperatively and at the postoperative evaluation. With the technique described herein, it was possible to successfully perform 3D reconstruction of the cortical surface. This reconstruction tool may serve as an adjunct to neuronavigation systems or may be used alone when such a system is unavailable. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Statistical representative elementary volumes of porous media determined using greyscale analysis of 3D tomograms

    NASA Astrophysics Data System (ADS)

    Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.

    2017-09-01

    Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis, but segmentation rejects huge amounts of signal information, information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation-based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two-point correlation function for surface area estimates using Bayes' theorem. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
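    The grey-level-to-porosity idea can be sketched with a simplified stand-in for the paper's Gaussian mixture model: assume a voxel's grey value mixes the two pure-phase means linearly, so the mixing fraction is a per-voxel porosity estimate and total porosity follows without any segmentation step. The phase means and noise level below are synthetic assumptions:

```python
import numpy as np

def voxel_porosity(grey, pore_mean, grain_mean):
    """Linear two-phase partial-volume model (a simplification of a Gaussian
    mixture approach): treat each grey level as a mixture of the pure pore
    and pure grain intensities, yielding a per-voxel porosity in [0, 1]."""
    phi = (grain_mean - grey) / (grain_mean - pore_mean)
    return np.clip(phi, 0.0, 1.0)

rng = np.random.default_rng(0)
# synthetic tomogram: ~30% pore voxels (mean 50), ~70% grain (mean 200),
# with noise so that many voxels sit in between (partial volume)
labels = rng.random((32, 32, 32)) < 0.3
grey = np.where(labels, 50.0, 200.0) + rng.normal(0, 5, labels.shape)
phi = voxel_porosity(grey, 50.0, 200.0)
porosity = phi.mean()       # total porosity, no segmentation required
```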

  7. A concept of volume rendering guided search process to analyze medical data set.

    PubMed

    Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro

    2008-03-01

    This paper first presents a parallel-coordinates-based parameter control panel (PCP). The PCP is used to control the parameters of focal region-based volume rendering (FRVR) during data analysis. It uses a parallel coordinates style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show dependencies between renditions and parameters. Based on the PCP, a concept of a volume rendering guided search process is proposed. The search pipeline is divided into four phases. Different parameters of FRVR are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can play the role of guiding a search process in the rendition space to help users efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness.

  8. HDlive rendering images of the fetal stomach: a preliminary report.

    PubMed

    Inubashiri, Eisuke; Abe, Kiyotaka; Watanabe, Yukio; Akutagawa, Noriyuki; Kuroki, Katumaru; Sugawara, Masaki; Maeda, Nobuhiko; Minami, Kunihiro; Nomura, Yasuhiro

    2015-01-01

    This study aimed to show reconstruction of the fetal stomach using the HDlive rendering mode in ultrasound. Seventeen healthy singleton fetuses at 18-34 weeks' gestational age were observed using the HDlive rendering mode of ultrasound in utero. In all of the fetuses, we identified specific spatial structures, including macroscopic anatomical features (e.g., the pylorus, cardia, fundus, and greater curvature) of the fetal stomach, using the HDlive rendering mode. In particular, HDlive rendering images showed remarkably fine details that appeared as if they were being viewed under an endoscope, with visible rugal folds after 27 weeks' gestational age. Our study suggests that the HDlive rendering mode can be used as an additional method for evaluating the fetal stomach. The HDlive rendering mode shows detailed 3D structural images and anatomically realistic images of the fetal stomach. This technique may be effective in prenatal diagnosis for examining detailed information of fetal organs.

  9. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume, the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is, surprisingly, simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image with the same DOF and brightness level.
The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
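    The stack-and-blend step of a focus-stacking pipeline such as AFS can be sketched generically. Registration (which AFS performs analytically) is assumed already done; the blending below uses a common sharpness heuristic (local Laplacian energy), not the authors' specific method:

```python
import numpy as np

def laplacian_energy(img):
    """Local sharpness proxy: magnitude of a discrete Laplacian."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def focus_stack(images):
    """Blend a registered focal stack by picking, per pixel, the frame
    with the highest local sharpness."""
    stack = np.stack(images)
    sharp = np.stack([laplacian_energy(im) for im in images])
    best = sharp.argmax(axis=0)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# two frames of the same scene, each sharp in a different half
checker = np.indices((8, 8)).sum(axis=0) % 2 * 1.0   # high-detail pattern
blurred = np.full((8, 8), 0.5)                       # defocused stand-in
img_a = np.where(np.arange(8) < 4, checker, blurred)   # left half in focus
img_b = np.where(np.arange(8) >= 4, checker, blurred)  # right half in focus
fused = focus_stack([img_a, img_b])                  # recovers detail everywhere
```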

  10. Fast DRR generation for 2D to 3D registration on GPUs.

    PubMed

    Tornai, Gábor János; Cserey, György; Pappas, Ion

    2012-08-01

    The generation of digitally reconstructed radiographs (DRRs) is the most time-consuming step on the CPU in intensity-based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has applications in several image-guided interventions. This work presents optimized DRR rendering on graphics processing units (GPUs) and compares the performance achievable on four commercially available devices. A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, the execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean execution time increased linearly with the number of pixels when sampling was used. The presented results outperform previously reported results. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi-online, in less than a second or, depending on the application and hardware, in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image-guided interventions, where the registration is continuously performed to match the real-time x-ray.
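    The core of DRR generation is a line integral of attenuation along each ray. A toy parallel-beam version with the paper's pixel-sampling idea (evaluating only a random subset of detector pixels) might look like this; the perspective geometry, interpolation and GPU parallelism of the actual method are omitted:

```python
import numpy as np

def drr_parallel(ct, sampling_ratio=1.0, rng=None):
    """Toy digitally reconstructed radiograph: parallel rays along the z axis,
    each line integral approximated by summing attenuation values.
    sampling_ratio < 1 evaluates only a random subset of detector pixels."""
    full = ct.sum(axis=0)                  # one line integral per detector pixel
    if sampling_ratio >= 1.0:
        return full
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(full.shape) < sampling_ratio
    out = np.zeros_like(full)
    out[mask] = full[mask]                 # unsampled pixels left empty
    return out

ct = np.ones((72, 8, 8))                 # uniform attenuation, 72 slices
drr = drr_parallel(ct)                   # every ray integrates 72 unit voxels
drr_sparse = drr_parallel(ct, sampling_ratio=0.1)   # ~10% of rays evaluated
```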

  11. Integral image rendering procedure for aberration correction and size measurement.

    PubMed

    Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion

    2014-05-20

    The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.

  12. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
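    The consistency argument can be demonstrated in miniature. Applying a sharp transfer function to a block-averaged value can erase a small bright feature, while applying it to a per-block histogram (a discrete pdf) and taking the expectation preserves the feature's contribution. This sketch uses plain dense histograms rather than the paper's sparse 4D Gaussian-mixture representation:

```python
import numpy as np

def downsample_mean(vol):
    """Standard down-sampling: average each 2x2x2 block to one voxel."""
    z, y, x = (s // 2 for s in vol.shape)
    return vol[:2*z, :2*y, :2*x].reshape(z, 2, y, 2, x, 2).mean(axis=(1, 3, 5))

def downsample_pdf(vol, bins=8):
    """pdf-based down-sampling: keep a small intensity histogram (a discrete
    pdf) per 2x2x2 block instead of a single averaged value."""
    z, y, x = (s // 2 for s in vol.shape)
    blocks = vol[:2*z, :2*y, :2*x].reshape(z, 2, y, 2, x, 2)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(z, y, x, 8)
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(blocks, edges) - 1, 0, bins - 1)
    pdf = np.zeros((z, y, x, bins))
    for b in range(bins):
        pdf[..., b] = (idx == b).mean(axis=-1)
    return pdf, 0.5 * (edges[:-1] + edges[1:])   # pdfs and bin centres

tf = lambda v: (v > 0.9).astype(float)   # sharp transfer function

vol = np.zeros((2, 2, 2)); vol[0, 0, 0] = 1.0     # one bright voxel in the block
naive = tf(downsample_mean(vol))                  # TF(mean): bright voxel vanishes
pdf, centres = downsample_pdf(vol)
consistent = (pdf * tf(centres)).sum(axis=-1)     # E[TF]: contribution preserved
```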

  13. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device alone cannot provide a satisfactory quality of experience for radiologists. This paper describes a medical system that retrieves medical images from the picture archiving and communication system (PACS) on a mobile device over a wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes the remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrated that the proposed medical application can provide a smooth interactive experience over WLAN and 3G networks.

  14. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are scanned with larger volumes of data, more and more radiologists and clinicians would like to use PACS workstations (WS) to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that provides not only normal 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  15. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, in which the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D-printed realistic models. Our results correlate well with Brodsky’s classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667
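    Volume of a reconstructed closed surface, such as a tonsil model, can be computed with the divergence theorem: sum the signed volumes of tetrahedra formed by each triangle and the origin. The sketch below is an assumed generic formula, not the authors' pipeline, verified here on a unit cube:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangle mesh via the divergence theorem:
    sum of signed tetrahedron volumes formed by each face and the origin.
    Faces must be consistently oriented."""
    v = np.asarray(vertices, float)
    f = np.asarray(faces, int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    return abs(np.einsum('ij,ij->i', a, np.cross(b, c)).sum()) / 6.0

# unit cube as 12 consistently oriented triangles; vertex index = 4x + 2y + z
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
faces = np.array([
    [0, 1, 3], [0, 3, 2],   # x = 0 face
    [4, 6, 7], [4, 7, 5],   # x = 1 face
    [0, 4, 5], [0, 5, 1],   # y = 0 face
    [2, 3, 7], [2, 7, 6],   # y = 1 face
    [0, 2, 6], [0, 6, 4],   # z = 0 face
    [1, 5, 7], [1, 7, 3],   # z = 1 face
])
vol_mm3 = mesh_volume(verts, faces)    # 1.0 for the unit cube
```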

  16. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface (GUI). This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
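    The reassembly step described above is straightforward to sketch. The illustration below uses Python/NumPy rather than Matlab: 2D binary slice masks are stacked into a 3D array, and the voxels a surface rendering would cover are found as object voxels with at least one 6-connected background neighbour:

```python
import numpy as np

def stack_slices(slices):
    """Reassemble a sequence of 2D binary masks (one per microscope slice)
    into a single 3D volume."""
    return np.stack(slices).astype(bool)

def surface_voxels(vol):
    """Mark object voxels with at least one 6-connected background neighbour;
    these are the voxels a surface rendering would cover."""
    padded = np.pad(vol, 1)            # background border around the volume
    core = np.ones_like(vol)
    for ax in range(3):
        for shift in (1, -1):
            core &= np.roll(padded, shift, axis=ax)[1:-1, 1:-1, 1:-1]
    return vol & ~core                 # object minus fully interior voxels

# segmented cross-sections of a 3x3x3 cube inside a 5x5x5 grid
slices = [np.zeros((5, 5), bool) for _ in range(5)]
for z in range(1, 4):
    slices[z][1:4, 1:4] = True
vol = stack_slices(slices)
shell = surface_voxels(vol)            # 26 surface voxels, 1 interior voxel
```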

  17. In vivo three-dimensional imaging of human corneal nerves using Fourier-domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Shin, Jun Geun; Hwang, Ho Sik; Eom, Tae Joong; Lee, Byeong Ha

    2017-01-01

    We have employed Fourier-domain optical coherence tomography (FD-OCT) to achieve corneal nerve imaging, which could be useful in surgical planning and refractive surgery. Because the three-dimensional (3-D) images of the corneal nerves were acquired in vivo, unintentional movement of the subject during the measurement led to imaging artifacts. These artifacts were compensated for with a series of signal processing techniques, namely realigning A-scan images to flatten the corneal boundary and cross-correlating adjacent B-scan images. To overcome the undesirably large signal from scattering at the corneal surface and iris, volume rendering and maximum intensity projections were performed using only the data taken in the stromal region of the cornea, located between 200 and 500 μm from the corneal surface. The 3-D volume imaging of a 10 × 10 mm² area took 9.8 s, which is slightly shorter than the normal tear breakup time. This allowed us to image the branched and threadlike corneal nerve bundles within the human eye. The experimental results show that FD-OCT systems have the potential to be useful in clinical investigations of corneal nerves and in minimizing nerve injury during clinical or surgical procedures.
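
    The cross-correlation step of the motion compensation can be illustrated with a one-dimensional sketch: estimate the integer shift that best aligns one scan to its neighbor from the peak of their cross-correlation, then shift it back. This is the general idea only, not the paper's implementation; the function names are ours.

```python
import numpy as np

def estimate_axial_shift(ref, mov):
    """Integer shift (in samples) that best aligns `mov` to `ref`,
    read off the peak of their full cross-correlation."""
    ref = ref - ref.mean()
    mov = mov - mov.mean()
    corr = np.correlate(mov, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def compensate(ref, mov):
    """Shift `mov` back so adjacent scans line up (a wrap-around roll
    stands in for the zero-padded shift a real pipeline would use)."""
    return np.roll(mov, -estimate_axial_shift(ref, mov))
```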

  18. Visualization of stereoscopic anatomic models of the paranasal sinuses and cervical vertebrae from the surgical and procedural perspective.

    PubMed

    Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei

    2017-11-01

    Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models, including the facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve roots, and vertebral artery, that can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and cervical spine injection procedures. Volume rendering, surface rendering, and a new semi-auto-combined rendering technique were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.

  19. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization

    PubMed Central

    Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515
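
    The shared-state mechanism described above can be sketched minimally: a small JSON object carries only the view state that peers must mirror, and each client applies incoming snapshots to its local renderer. The field names below are illustrative stand-ins, not the actual XTK renderer attributes.

```python
import json

SHARED_FIELDS = ("camera_position", "camera_up", "volume_opacity")

def snapshot(state):
    """Serialize the minimal view state that collaborating peers mirror."""
    return json.dumps({k: state[k] for k in SHARED_FIELDS})

def apply_snapshot(state, payload):
    """Update a local renderer's state from a peer's JSON snapshot."""
    state.update(json.loads(payload))
    return state
```

    Keeping the payload this small is what makes per-interaction synchronization through a service like the Google Drive Realtime API cheap.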

  20. Three-dimensional visualization of endolymphatic hydrops after intravenous administration of single-dose gadodiamide.

    PubMed

    Naganawa, Shinji; Yamazaki, Masahiro; Kawai, Hisashi; Bokura, Kiminori; Sone, Michihiko; Nakashima, Tsutomu

    2013-01-01

    Endolymphatic hydrops can be visualized with a high contrast-to-noise ratio even after intravenous injection of single-dose gadolinium-based contrast material (IV-SD-GBCM) using HYDROPS-Mi2 images. We applied 3-dimensional rendering software to process HYDROPS-Mi2 images of 15 ears with and without suspected Ménière's disease and separately visualized the volumes of endo- and perilymph in patients with Ménière's disease even after IV-SD-GBCM. Such three-dimensional visualization will aid understanding of the pathophysiology of Ménière's disease.

  1. Cryo-FIB-SEM serial milling and block face imaging: Large volume structural analysis of biological tissues preserved close to their native state.

    PubMed

    Vidavsky, Netta; Akiva, Anat; Kaplan-Ashiri, Ifat; Rechav, Katya; Addadi, Lia; Weiner, Steve; Schertel, Andreas

    2016-12-01

    Many important biological questions can be addressed by studying in 3D large volumes of intact, cryo-fixed hydrated tissues (≥10,000 μm³) at high resolution (5-20 nm). This can be achieved using serial FIB milling and block face surface imaging under cryo conditions. Here we demonstrate the unique potential of the cryo-FIB-SEM approach using two extensively studied model systems: sea urchin embryos and the tail fin of zebrafish larvae. We focus in particular on the environment of mineral deposition sites. The cellular organelles, including mitochondria, Golgi, ER, nuclei and nuclear pores, are made visible by the image contrast created by differences in surface potential of different biochemical components. Auto-segmentation and/or volume rendering of the image stacks and 3D reconstruction of the skeleton and the cellular environment provide a detailed view of the relative distribution in space of the tissue and cellular components, and thus of their interactions. Simultaneous acquisition of secondary and back-scattered electron images adds additional information. For example, a serial view of the zebrafish tail reveals the presence of electron-dense mineral particles inside mitochondrial networks extending more than 20 μm in depth into the block. Large-volume imaging using cryo-FIB-SEM, as demonstrated here, can contribute significantly to the understanding of the structures and functions of diverse biological tissues.

  2. A stereoscopic system for viewing the temporal evolution of brain activity clusters in response to linguistic stimuli

    NASA Astrophysics Data System (ADS)

    Forbes, Angus; Villegas, Javier; Almryde, Kyle R.; Plante, Elena

    2014-03-01

    In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional magnetic resonance imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on independent component analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. That is, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language by displaying these clusters as they change over time. The raw fMRI data is presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering, and the clusters are presented using a ray-casting volume rendering technique. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data.
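
    At the heart of any ray-casting volume renderer is a per-ray compositing loop. The sketch below shows the standard front-to-back alpha compositing step for a single ray; it is illustrative only, not this system's implementation.

```python
def composite_ray(samples, opacities):
    """Front-to-back alpha compositing along one ray -- the accumulation
    step at the core of ray-cast volume rendering. `samples` are scalar
    intensities along the ray; `opacities` are the per-sample alphas
    assigned by a transfer function."""
    color, alpha = 0.0, 0.0
    for c, a in zip(samples, opacities):
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:          # early ray termination
            break
    return color, alpha
```

    Early ray termination is the usual optimization: once accumulated opacity is near 1, further samples cannot change the pixel.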

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are: trends in high-performance computing; scientific visualization, including OpenGL, ray tracing and volume rendering, VTK, and ParaView; and data science at scale, including in-situ visualization, image databases, distributed-memory parallelism, shared-memory parallelism, VTK-m, "big data", and an analysis example.

  4. Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes

    PubMed Central

    Zeil, Jochen

    2017-01-01

    Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations, because image differences increase smoothly with distance from a reference location, and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its ‘catchment area’) has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals, there is a need to characterize the ‘catchment volumes’ within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large, extending for metres, depending on the sensitivity of the viewer to image differences. The size and shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above-horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots. PMID:29088300
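
    The two quantities the abstract builds on — a translational image difference and a rotational (compass) comparison — are easy to state concretely. A minimal sketch, assuming panoramic images stored as 2D arrays whose columns wrap around in azimuth (function names are ours):

```python
import numpy as np

def image_difference(ref, view):
    """Root-mean-square pixel difference between a reference snapshot
    and a view rendered elsewhere -- the quantity whose smooth rise
    with distance defines a catchment area or volume."""
    ref, view = np.asarray(ref, float), np.asarray(view, float)
    return float(np.sqrt(np.mean((ref - view) ** 2)))

def best_rotation(ref, view):
    """Horizontal (column) roll of `view` that minimizes the image
    difference: the rotational comparison that yields compass
    information."""
    diffs = [image_difference(ref, np.roll(view, s, axis=1))
             for s in range(ref.shape[1])]
    return int(np.argmin(diffs))
```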

  5. Hierarchical storage of large volumes of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners can generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images and 3D-rendered images, as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent reprocessing if required by specific clinical situations. We tested the latest generation of RAID servers from Apple with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software OsiriX, allowing remote workstations to directly access DICOM image files located on the server through the Bonjour service discovery technology. This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  6. On soft clipping of Zernike moments for deblurring and enhancement of optical point spread functions

    NASA Astrophysics Data System (ADS)

    Becherer, Nico; Jödicke, Hanna; Schlosser, Gregor; Hesser, Jürgen; Zeilfelder, Frank; Männer, Reinhard

    2006-02-01

    Blur and noise originating from the physical imaging processes degrade microscope data. Accurate deblurring techniques require, however, an accurate estimate of the underlying point-spread function (PSF). A good representation of PSFs can be achieved with Zernike polynomials, since they offer a compact representation in which low-order coefficients represent typical aberrations of optical wavefronts while noise is represented in higher-order coefficients. A quantitative description of the (Gaussian) noise distribution over the Zernike moments of various orders is given, which is the basis for the new soft clipping approach to denoising PSFs. Instead of discarding moments beyond a certain order, those Zernike moments that are more sensitive to noise are dampened according to the measured distribution and the present noise model. Further, a new scheme to combine experimental and theoretical PSFs in Zernike space is presented. According to our experimental reconstructions, using the new improved PSF raises the correlation between reconstructed and original volume by 15% in average cases and by up to 85% for thin fibre structures, compared to reconstructions with a non-improved PSF. Finally, we demonstrate the advantages of our approach on 3D images from confocal microscopes by generating visually improved volumes. Additionally, we present a method to render the reconstructed results using a new, almost artifact-free volume rendering method based on a shear-warp technique, wavelet data encoding, and a recent approach that approximates the gray-value distribution with a super-spline model.
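
    The contrast between hard truncation and soft clipping can be sketched directly on an array of moments. The logistic weight below is purely illustrative; the paper derives its damping from a measured Gaussian noise distribution over the moments, not from this formula.

```python
import numpy as np

def soft_clip(moments, orders, cutoff, softness=2.0):
    """Soft clipping of Zernike moments: instead of discarding all
    moments above `cutoff`, attenuate them with a smooth weight so
    noise-dominated high orders are dampened, not truncated."""
    orders = np.asarray(orders, float)
    weights = 1.0 / (1.0 + np.exp((orders - cutoff) / softness))
    return np.asarray(moments, float) * weights
```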

  7. Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates

    NASA Astrophysics Data System (ADS)

    Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.

    2002-03-01

    A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are a promising technique for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, with IBMR techniques a frequent problem is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem, by distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite into a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results which indicate we have achieved a stable frame rate. In addition to LOD-Sprite, our distribution method holds promise for other IBMR techniques.
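
    The keyframe/spriteframe policy described above reduces to a small decision loop: keep warping the cached sprite (cheap 2D resampling) until the view has drifted too far from the keyframe that produced it, then pay for a fresh high-LOD 3D render. An illustrative sketch under that assumption (the names and the error test are ours, not the paper's):

```python
def next_frame(view, keyframe_view, sprite, error, threshold,
               render_3d, warp_2d):
    """Return (frame, sprite, keyframe_view) for one display update."""
    if sprite is None or error(view, keyframe_view) > threshold:
        sprite = render_3d(view)            # slow: full 3D keyframe render
        keyframe_view = view
    # fast path: 2D warp of the cached sprite into the current view
    return warp_2d(sprite, keyframe_view, view), sprite, keyframe_view
```

    Distributing this loop, as the paper does, amounts to running `render_3d` on one machine while `warp_2d` keeps the local frame rate stable.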

  8. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed

    2013-01-01

    Optical coherence tomography (OCT) is a recently established imaging technique that can describe different information about the internal structures of an object and image various aspects of biological tissues. OCT image segmentation has mostly been applied to retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current research in OCT segmentation is mostly aimed at improving accuracy and precision and at reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving research toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137

  9. Diagnostic accuracy of a volume-rendered computed tomography movie and other computed tomography-based imaging methods in assessment of renal vascular anatomy for laparoscopic donor nephrectomy.

    PubMed

    Yamamoto, Shingo; Tanooka, Masao; Ando, Kumiko; Yamano, Toshiko; Ishikura, Reiichi; Nojima, Michio; Hirota, Shozo; Shima, Hiroki

    2009-12-01

    To evaluate the diagnostic accuracy of computed tomography (CT)-based imaging methods for assessing renal vascular anatomy, imaging studies, including standard axial CT, three-dimensional volume-rendered CT (3DVR-CT), and a 3DVR-CT movie, were performed on 30 patients who underwent laparoscopic donor nephrectomy (10 right side, 20 left side) for predicting the location of the renal arteries and renal, adrenal, gonadal, and lumbar veins. These findings were compared with videos obtained during the operation. Two of 37 renal arteries observed intraoperatively were missed by standard axial CT and 3DVR-CT, whereas all arteries were identified by the 3DVR-CT movie. Two of 36 renal veins were missed by standard axial CT and 3DVR-CT, whereas 1 was missed by the 3DVR-CT movie. In 20 left renal hilar anatomical structures, 20 adrenal, 20 gonadal, and 22 lumbar veins were observed during the operation. Preoperatively, the standard axial CT, 3DVR-CT, and 3DVR-CT movie detected 11, 19, and 20 adrenal veins; 13, 14, and 19 gonadal veins; and 6, 11, and 15 lumbar veins, respectively. Overall, of 135 renal vascular structures, the standard axial CT, 3DVR-CT, and 3DVR-CT movie accurately detected 99 (73.3%), 113 (83.7%), and 126 (93.3%) vessels, respectively, which indicated that the 3DVR-CT movie demonstrated a significantly higher detection rate than other CT-based imaging methods (P < 0.05). The 3DVR-CT movie accurately provides essential information about the renal vascular anatomy before laparoscopic donor nephrectomy.
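
    The overall detection rates quoted above follow directly from the stated counts (135 renal vascular structures in total); a quick check:

```python
# Per-method detections out of 135 renal vascular structures,
# as reported in the abstract.
detected = {"standard axial CT": 99, "3DVR-CT": 113, "3DVR-CT movie": 126}
TOTAL = 135
rates = {method: round(100.0 * n / TOTAL, 1) for method, n in detected.items()}
```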

  10. Quantitative Evaluation of a Planetary Renderer for Terrain Relative Navigation

    NASA Astrophysics Data System (ADS)

    Amoroso, E.; Jones, H.; Otten, N.; Wettergreen, D.; Whittaker, W.

    2016-11-01

    A ray-tracing renderer based on LOLA and LROC elevation models is presented and quantitatively compared against LRO WAC and NAC images for photometric accuracy. We investigated the use of rendered images for terrain-relative navigation.

  11. The Architecture of an Automatic eHealth Platform With Mobile Client for Cerebrovascular Disease Detection

    PubMed Central

    Wang, Xingce; Bie, Rongfang; Wu, Zhongke; Zhou, Mingquan; Cao, Rongfei; Xie, Lizhi; Zhang, Dong

    2013-01-01

    Background In recent years, cerebrovascular disease has been the leading cause of death and adult disability in the world. This study describes an efficient approach to detecting cerebrovascular disease. Objective In order to improve cerebrovascular treatment, prevention, and care, an automatic cerebrovascular disease detection eHealth platform is designed and studied. Methods We designed an automatic eHealth platform for cerebrovascular disease detection with a four-level architecture: object control layer, data transmission layer, service supporting layer, and application service layer. The platform has eight main functions: cerebrovascular database management, preprocessing of cerebral image data, image viewing and adjustment, image cropping, compression, and measurement, cerebrovascular segmentation, 3-dimensional cerebrovascular reconstruction, cerebrovascular rendering, cerebrovascular virtual endoscopy, and automatic detection. Several key technologies were employed in the implementation of the platform. An anisotropic diffusion model was used to reduce noise. Statistical segmentation with a Gaussian-Markov random field (G-MRF) model and a stochastic estimation-maximization (SEM) parameter estimation method was used to realize the cerebrovascular segmentation. Ball B-spline curves were proposed to model the cerebral blood vessels. Compute Unified Device Architecture (CUDA)-based ray-casting volume rendering with curvature enhancement and boundary enhancement was used to realize the volume rendering model. We implemented the platform with a network client and a mobile phone client to fit different users. Results The implemented platform runs on a common personal computer. Experiments on 32 patients' brain computed tomography or magnetic resonance imaging data stored in the system verified the feasibility and validity of each model we proposed. The platform is partly used in cranial nerve surgery at the First Hospital Affiliated to the General Hospital of the People's Liberation Army and in radiology at Beijing Navy General Hospital, and it has also found application in medical imaging teaching at Tianjin Medical University. The application results have been validated by our neurosurgeon and radiologist. Conclusions The platform appears beneficial in the diagnosis of cerebrovascular disease. The long-term benefits and additional applications of this technology warrant further study. The research built a diagnosis and treatment platform for human tissues with complex geometry and topology, such as cerebral vessels, based on the Internet of Things. PMID:25098861

  12. The production of digital and printed resources from multiple modalities using visualization and three-dimensional printing techniques.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Chen, Shi; Pan, Zhouxian; Deng, Qingqiong; Yao, Yong; Pan, Hui; He, Taiping; Wang, Xingce

    2017-01-01

    Virtual digital resources and printed models have become indispensable tools for medical training and surgical planning. Nevertheless, printed models of soft-tissue organs are still challenging to reproduce. This study adopts open-source packages and a low-cost desktop 3D printer to convert multiple modalities of medical images into digital resources (volume-rendered images and digital models) and lifelike printed models, which are useful for enhancing our understanding of the geometric structure and complex spatial nature of anatomical organs. Neuroimaging technologies such as CT, CTA, MRI, and TOF-MRA collect serial medical images. The procedures for producing digital resources can be divided into volume rendering and medical image reconstruction. To verify the accuracy of reconstruction, this study presents qualitative and quantitative assessments. Subsequently, digital models are archived as stereolithography (STL) files and imported into the bundled software of the 3D printer. The printed models are produced using polylactide filament materials. We have successfully converted multiple modalities of medical images into digital resources and printed models for both hard organs (cranial base and tooth) and soft-tissue organs (brain, blood vessels of the brain, the heart chambers and vessel lumen, and pituitary tumor). Multiple digital resources and printed models were provided to illustrate the anatomical relationship between organs and complicated surrounding structures. Three-dimensional printing (3DP) is a powerful tool for producing lifelike and tangible models. We present an accessible and cost-effective method for producing both digital resources and printed models. The choice of modality in medical images and the processing approach are important when reproducing soft-tissue organ models. The accuracy of the printed model is determined by the quality of the organ models and of the 3DP. With the ongoing improvement of printing techniques and the variety of materials available, 3DP will become an indispensable tool in medical training and surgical planning.
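
    The archival step — writing a reconstructed surface mesh out as a stereolithography file — is simple enough to sketch. This is a minimal ASCII STL serializer for illustration only (real pipelines use mesh libraries and usually the binary variant):

```python
def ascii_stl(triangles, name="organ"):
    """Serialize triangles (each a sequence of three (x, y, z) vertices)
    to the ASCII variant of the STL (stereolithography) format. Facet
    normals are written as zero; slicing software recomputes them from
    the vertex winding order."""
    lines = [f"solid {name}"]
    for tri in triangles:
        lines += ["  facet normal 0 0 0", "    outer loop"]
        lines += [f"      vertex {x:.6e} {y:.6e} {z:.6e}" for x, y, z in tri]
        lines += ["    endloop", "  endfacet"]
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```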

  13. From the sample preparation to the volume rendering images of small animals: A step by step example of a procedure to carry out the micro-CT study of the leafhopper insect Homalodisca vitripennis (Hemiptera: Cicadellidae)

    USDA-ARS?s Scientific Manuscript database

    Advances in micro-CT: digital computed tomography (CT) scanning uses X-rays to make detailed pictures of structures inside the body. Combining micro-CT with Digital Video Library systems, and linking this to Big Data, will change the way researchers, entomologists, and the public search and use anato...

  14. Three-dimensional x-ray diffraction nanoscopy

    NASA Astrophysics Data System (ADS)

    Nikulin, Andrei Y.; Dilanian, Ruben A.; Zatsepin, Nadia A.; Muddle, Barry C.

    2008-08-01

    A novel approach to x-ray diffraction data analysis for the non-destructive determination of the shape of nanoscale particles and clusters in three dimensions is illustrated with representative examples of composite nanostructures. The technique is insensitive to x-ray coherence, which allows 3D reconstruction of a modal image without tomographic synthesis and in-situ analysis of large volumes of material (over several cubic millimeters) with a spatial resolution of a few nanometers, rendering the approach suitable for laboratory facilities.

  15. Electrical Capacitance Volume Tomography: Design and Applications

    PubMed Central

    Wang, Fei; Marashdeh, Qussai; Fan, Liang-Shih; Warsito, Warsito

    2010-01-01

    This article reports recent advances and progress in the field of electrical capacitance volume tomography (ECVT). ECVT, developed from two-dimensional electrical capacitance tomography (ECT), is a promising non-intrusive imaging technology that can provide real-time three-dimensional images of the sensing domain. Images are reconstructed from capacitance measurements acquired by electrodes placed on the outside boundary of the testing vessel. In this article, a review of progress in capacitance sensor design and applications to multi-phase flows is presented. The sensor shape, electrode configuration, and number of electrodes, which comprise the three key elements of three-dimensional capacitance sensors, are illustrated. The article also highlights applications of ECVT sensors on vessels of various sizes from 1 to 60 inches with complex geometries. Case studies are used to show the capability and validity of ECVT. The studies provide qualitative and quantitative real-time three-dimensional information about the measuring domain under study. The advantages of ECVT render it a favorable tool for industrial applications and fundamental multi-phase flow research. PMID:22294905
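
    The reconstruction step ECVT inherits from ECT can be illustrated with linear back projection, the simplest of the algorithms used in this field. A sketch only, not the authors' reconstruction code:

```python
import numpy as np

def linear_back_projection(sensitivity, capacitances):
    """Linear back projection: project normalized capacitance
    measurements back through the sensitivity matrix (rows = electrode
    pairs, columns = voxels) and normalize by the summed sensitivity
    per voxel."""
    g = sensitivity.T @ np.asarray(capacitances, float)
    denom = sensitivity.sum(axis=0)
    return g / np.where(denom == 0, 1.0, denom)
```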

  16. Lattice Boltzmann methods applied to large-scale three-dimensional virtual cores constructed from digital optical borehole images of the karst carbonate Biscayne aquifer in southeastern Florida

    USGS Publications Warehouse

    Sukop, Michael C.; Cunningham, Kevin J.

    2014-01-01

    Digital optical borehole images at approximately 2 mm vertical resolution and borehole caliper data were used to create three-dimensional renderings of the distribution of (1) matrix porosity and (2) vuggy megaporosity for the karst carbonate Biscayne aquifer in southeastern Florida. The renderings based on the borehole data were used as input into Lattice Boltzmann methods to obtain intrinsic permeability estimates for this extremely transmissive aquifer, where traditional aquifer test methods may fail due to very small drawdowns and non-Darcian flow that can reduce apparent hydraulic conductivity. Variogram analysis of the borehole data suggests a nearly isotropic rock structure at lag lengths up to the nominal borehole diameter. A strong correlation between the diameter of the borehole and the presence of vuggy megaporosity in the data set led to a bias in the variogram where the computed horizontal spatial autocorrelation is strong at lag distances greater than the nominal borehole size. Lattice Boltzmann simulation of flow across a 0.4 × 0.4 × 17 m (2.72 m³ volume) parallel-walled column of rendered matrix and vuggy megaporosity indicates a high hydraulic conductivity of 53 m s⁻¹. This value is similar to previous Lattice Boltzmann calculations of hydraulic conductivity in smaller limestone samples of the Biscayne aquifer. The development of simulation methods that reproduce dual-porosity systems with higher resolution and fidelity and that consider flow through horizontally longer renderings could provide improved estimates of the hydraulic conductivity and help to address questions about the importance of scale.
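
    The link between the intrinsic permeability the Lattice Boltzmann simulation estimates and the hydraulic conductivity the abstract reports is the standard relation K = kρg/μ. A minimal helper, assuming water properties near 20 °C:

```python
def hydraulic_conductivity(permeability_m2, rho=998.0, g=9.81, mu=1.0e-3):
    """Convert intrinsic permeability k (m^2) to hydraulic conductivity
    K = k * rho * g / mu (m/s). Defaults: water density ~998 kg/m^3,
    g = 9.81 m/s^2, dynamic viscosity ~1e-3 Pa*s."""
    return permeability_m2 * rho * g / mu
```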

  18. Analysis of the chicken retina with an adaptive optics multiphoton microscope.

    PubMed

    Bueno, Juan M; Giakoumaki, Anastasia; Gualda, Emilio J; Schaeffel, Frank; Artal, Pablo

    2011-06-01

    The structure and organization of the chicken retina have been investigated with an adaptive optics multiphoton imaging microscope in a backward configuration. Non-stained flat-mounted retinal tissues were imaged at different depths, from the retinal nerve fiber layer to the outer segment, by detecting the intrinsic nonlinear fluorescent signal. From the stacks of images corresponding to the different retinal layers, volume renderings of the entire retina were reconstructed. The densities of the photoreceptor and ganglion cell layers were directly estimated from the images as a function of retinal eccentricity. The maximum anatomical resolving power at different retinal eccentricities was also calculated. This technique could be used for a better characterization of retinal alterations during myopia development, and may be useful for visualization of retinal pathologies and intoxication during pharmacological studies.

  19. 3D Printout Models vs. 3D-Rendered Images: Which Is Better for Preoperative Planning?

    PubMed

    Zheng, Yi-xiong; Yu, Di-fei; Zhao, Jian-gang; Wu, Yu-lian; Zheng, Bin

    2016-01-01

    Correct interpretation of a patient's anatomy and of the changes that occur secondary to a disease process is crucial in the preoperative process to ensure optimal surgical treatment. In this study, we presented 3 different pancreatic cancer cases to surgical residents in the form of 3D-rendered images and 3D-printed models to investigate which modality resulted in the most appropriate preoperative plan. We selected 3 cases that would require significantly different preoperative plans based on key features identifiable on preoperative computed tomography imaging. 3D volume rendering and 3D printing were performed, respectively, to create the 2 different training modalities. A total of 30 first-year surgical residents were randomly divided into 2 groups. Besides traditional 2D computed tomography images, residents in group A (n = 15) reviewed 3D computer models, whereas residents in group B (n = 15) reviewed 3D-printed models. Both groups subsequently completed an examination, designed in-house, to assess the appropriateness of their preoperative plan and provide a numerical score of the quality of the surgical plan. Residents in group B showed significantly higher quality-of-surgical-plan scores compared with residents in group A (76.4 ± 10.5 vs. 66.5 ± 11.2, p = 0.018). This difference was due in large part to a significant difference in knowledge of key surgical steps (22.1 ± 2.9 vs. 17.4 ± 4.2, p = 0.004) between the groups. All participants reported a high level of satisfaction with the exercise. Results from this study support our hypothesis that 3D-printed models improve the quality of surgical trainees' preoperative plans. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
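    The reported group comparison can be approximately reproduced from the published summary statistics alone, using SciPy's two-sample t-test from summary statistics (a sketch; the raw per-resident scores are not available, so only the means, SDs and group sizes quoted above are used):

```python
from scipy.stats import ttest_ind_from_stats

# Two-sample t-test recomputed from the reported summary statistics:
# group B (3D-printed models):  76.4 +/- 10.5, n = 15
# group A (3D-rendered images): 66.5 +/- 11.2, n = 15
t, p = ttest_ind_from_stats(mean1=76.4, std1=10.5, nobs1=15,
                            mean2=66.5, std2=11.2, nobs2=15)
# p lands close to the reported value of 0.018
```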

  20. A web-based instruction module for interpretation of craniofacial cone beam CT anatomy.

    PubMed

    Hassan, B A; Jacobs, R; Scarfe, W C; Al-Rawi, W T

    2007-09-01

    To develop a web-based module for learner instruction in the interpretation and recognition of osseous anatomy on craniofacial cone-beam CT (CBCT) images. Volumetric datasets from three CBCT systems (i-CAT, NewTom 3G and AccuiTomo FPD) were acquired for various subjects using equipment-specific scanning protocols. The datasets were processed using multiple software packages to provide two-dimensional (2D) multiplanar reformatted (MPR) images (e.g. sagittal, coronal and axial) and three-dimensional (3D) visual representations (e.g. maximum intensity projection, minimum intensity projection, ray sum, surface and volume rendering). Distinct didactic modules illustrating the principles of CBCT systems, guided navigation of the volumetric dataset, and anatomic correlation of 3D models and 2D MPR graphics were developed using a hybrid combination of web authoring and image analysis techniques. Interactive web multimedia instruction was facilitated by the use of dynamic highlighting and labelling, and rendered video illustrations, supplemented with didactic textual material. HTML coding and JavaScript were implemented heavily to blend the educational modules. An interactive, multimedia educational tool for visualizing the morphology and interrelationships of osseous craniofacial anatomy, as depicted on CBCT MPR and 3D images, was designed and implemented. The present design of a web-based instruction module may assist radiologists and clinicians in learning how to recognize and interpret craniofacial anatomy on CBCT-based images more efficiently.

  1. [Giant aneurysm of the posterior communicating artery (PCoA) in cerebral panarteriography and CT angiography (CTA)].

    PubMed

    Jaźwiec, Przemysław; Chwiszczuk, Luiza; Sasiadek, Marek; Całka, Karol; Kuniej, Tomasz; Plucińska, Irena

    2008-01-01

    We present the case of a 32-year-old woman who was admitted to the Department of Neurology as an emergency due to sudden pupillary dilation (mydriasis), ptosis of the left eyelid and double vision. We performed plain CT, panarteriography of the cerebral vessels, and CT angiography with RT3D (volume-rendered three-dimensional) reconstruction images. On the basis of the imaging studies, a diagnosis of giant saccular aneurysm of the left posterior communicating artery was established. The patient was operated on and the giant aneurysm of the left posterior communicating artery was clipped, confirming the radiological diagnosis. No complications were noted during the operation or the postoperative period.

  2. A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images.

    PubMed

    Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito

    2012-07-01

    In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method, and then reconstructed by a hybrid method combining surface and volume rendering. With surface rendering, multimodality and multithreshold techniques for 1 tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as 1 computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. Mean (±SEM) area under the ROC curve for rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091; p<0.001, Mann-Whitney U-test). The authors report a new method for automatic registration of preoperative imaging data from CT, MRI, and 3D rotational angiography for reconstruction into 1 computer graphic. The diagnostic rate of DVA associated with brainstem cavernous malformation was significantly better using interactive computer graphics than with 2D images. Interactive computer graphics was also useful in helping to plan the surgical access corridor.

  3. Display gamma is an important factor in Web image viewing

    NASA Astrophysics Data System (ADS)

    Zhang, Xuemei; Lavin, Yingmei; Silverstein, D. Amnon

    2001-06-01

    We conducted a perceptual image preference experiment over the web to find out (1) whether typical computer users have significant variations in their display gamma settings, and (2) if so, whether the gamma settings have a significant perceptual effect on the appearance of images in their web browsers. The digital image renderings used had been found to have preferred tone characteristics in a previous lab-controlled experiment. They were rendered with 4 different gamma settings. The subjects were asked to view the images over the web, with their own computer equipment and web browsers, and made pair-wise subjective preference judgements on which rendering they liked best for each image. Each subject's display gamma setting was estimated using a 'gamma estimator' tool, implemented as a Java applet. The results indicated that (1) the users' gamma settings, as estimated in the experiment, span a wide range from about 1.8 to about 3.0; and (2) the subjects preferred images that were rendered with a 'correct' gamma value matching their display setting, and disliked images rendered with a gamma value not matching their display's. This indicates that display gamma estimation is a perceptually significant factor in web image optimization.
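    The role of display gamma described above can be illustrated with a toy numerical sketch (hypothetical encode/decode helpers in Python, not the study's Java applet): an image encoded for one gamma and shown on a display with another has its tones distorted, while matched gammas reproduce the intended luminance.

```python
import numpy as np

def render_for_gamma(linear, render_gamma):
    """Encode linear luminance for a display assumed to apply `render_gamma`."""
    return np.clip(linear, 0, 1) ** (1.0 / render_gamma)

def display_response(pixel, display_gamma):
    """Luminance actually produced by a display with the given gamma."""
    return pixel ** display_gamma

linear = np.linspace(0, 1, 5)
# Matched gamma reproduces the intended tones exactly...
matched = display_response(render_for_gamma(linear, 2.2), 2.2)
# ...while a mismatched display (here gamma 3.0) darkens the midtones.
mismatched = display_response(render_for_gamma(linear, 2.2), 3.0)
```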

  4. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality performance. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to several image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
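    The degradation step can be sketched as a Gaussian blur standing in for the measured system PSF. This is a minimal illustration assuming SciPy; the blur width `sigma_px` is an invented stand-in for the value that would be derived from the slanted-edge MTF measurement.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_rendered_to_real(rendered, sigma_px):
    """Degrade an 'ideal' rendered image with a Gaussian approximation
    of the real camera system's PSF so both share similar blur."""
    return gaussian_filter(rendered, sigma=sigma_px)

# A sharp synthetic edge (the kind used in slanted-edge MTF measurement).
rendered = np.zeros((32, 32))
rendered[:, 16:] = 1.0
blurred = match_rendered_to_real(rendered, sigma_px=1.5)
```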

  5. Congenital anatomic variants of the kidney and ureter: a pictorial essay.

    PubMed

    Srinivas, M R; Adarsh, K M; Jeeson, Riya; Ashwini, C; Nagaraj, B R

    2016-03-01

    Congenital renal parenchymal and pelvicalyceal abnormalities span a wide spectrum. Most of them, such as renal ectopia, crossed fused kidney and horseshoe kidney, are asymptomatic, while a few become complicated, leading to renal failure and death. It is very important for the radiologist to identify these anatomic variants and guide clinicians for surgical and therapeutic procedures. Cross-sectional imaging with a volume rendered technique/maximum intensity projection has surpassed ultrasonography and IVU for the identification and interpretation of some of these variants.

  6. Exploiting the potential of free software to evaluate root canal biomechanical preparation outcomes through micro-CT images.

    PubMed

    Neves, A A; Silva, E J; Roter, J M; Belladona, F G; Alves, H D; Lopes, R T; Paciornik, S; De-Deus, G A

    2015-11-01

    To propose an automated image processing routine based on free software to quantify root canal preparation outcomes in pairs of sound and instrumented roots after micro-CT scanning procedures. Seven mesial roots of human mandibular molars with different canal configuration systems were studied: (i) Vertucci's type 1, (ii) Vertucci's type 2, (iii) two individual canals, (iv) Vertucci's type 6, canals (v) with and (vi) without debris, and (vii) a canal with visible pulp calcification. All teeth were instrumented with the BioRaCe system and scanned in a Skyscan 1173 micro-CT before and after canal preparation. After reconstruction, the instrumented stack of images (IS) was registered against the preoperative sound stack of images (SS). Image processing included contrast equalization and noise filtering. Sound canal volumes were obtained by a minimum threshold. For the IS, a fixed conservative threshold was chosen as the best compromise between instrumented canal and dentine whilst avoiding debris, resulting in the instrumented canal plus empty spaces. Arithmetic and logical operations between the sound and instrumented stacks were used to identify debris. Noninstrumented dentine was calculated using a minimum threshold in the IS and subtracting it from the SS and total debris. Removed dentine volume was obtained by subtracting the SS from the IS. Quantitative data on total debris present in the root canal space after instrumentation, noninstrumented areas and removed dentine volume were obtained for each test case, as well as three-dimensional volume renderings. After standardization of micro-CT acquisition, reconstruction and image processing, a quantitative approach to calculating root canal biomechanical outcomes was achieved using free software. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.
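    The arithmetic and logical stack operations described above can be illustrated on toy binary volumes (invented shapes and voxel labels, not the study's registered micro-CT data): once both stacks are thresholded to "canal" masks, debris, removed dentine and untouched canal each fall out of one boolean expression.

```python
import numpy as np

# Toy binary stacks: True = canal (air) voxels, on registered pre/post volumes.
sound = np.zeros((4, 8, 8), dtype=bool)
sound[:, 2:6, 2:6] = True            # preoperative (sound) canal
instr = np.zeros((4, 8, 8), dtype=bool)
instr[:, 1:6, 1:6] = True            # postoperative (instrumented) canal
instr[:, 3, 3] = False               # a voxel occluded by packed debris

# Logical operations between the registered stacks, as in the proposed routine:
debris = sound & ~instr              # canal before, not open after -> debris
removed_dentine = instr & ~sound     # open after, dentine before -> cut away
untouched_canal = sound & instr      # canal left as it was
```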

  7. A heterogeneous computing environment for simulating astrophysical fluid flows

    NASA Technical Reports Server (NTRS)

    Cazes, J.

    1994-01-01

    In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.

  8. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets; at the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point-cloud-to-surface reconstruction, volume rendering, batch processing, etc. The software package is currently used in a number of universities for teaching and research.

  9. Computing volume potentials for noninvasive imaging of cardiac excitation.

    PubMed

    van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W

    2015-03-01

    In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient-specific three-dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. The MRI images were segmented in three dimensions using custom-written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high quality volume meshes and computation of BVP with a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.
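    Once a transfer matrix has been assembled (in the study, via FEM with FEniCS), computing the body volume potentials reduces to a matrix-vector product per time instant. A toy sketch, with a random matrix standing in for the FEM result and invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_nodes = 64, 500   # cardiac source nodes, thorax volume nodes

# Hypothetical precomputed transfer matrix (in practice assembled with FEM
# from the segmented anatomy and tissue conductivities).
A = rng.normal(size=(n_nodes, n_sources))
source_activity = rng.normal(size=n_sources)

# With A known, volume potentials follow from one matrix-vector product.
bvp = A @ source_activity
```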

  10. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine learning based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold- and connectivity-based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also take a look at these structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of leave-some-out experiments we obtain best mean DICE ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with DICE 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
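    A minimal sketch of voxel-wise random forest classification with scikit-learn (invented toy features and labels, not the paper's feature space or gold-standard segmentations): each voxel becomes a feature vector, the forest is trained on labeled voxels, and prediction over a new volume yields the multi-label map.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical per-voxel features: e.g. intensity, gradient magnitude, z-position.
X_train = rng.normal(size=(1000, 3))
y_train = (X_train[:, 0] > 0).astype(int)   # toy labels: 'soft tissue' vs 'background'

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

X_new = rng.normal(size=(200, 3))           # voxels of a new 'patient'
labels = clf.predict(X_new)                 # voxel-wise multi-label map
```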

  11. [Registration and 3D rendering of serial tissue section images].

    PubMed

    Liu, Zhexing; Jiang, Guiping; Dong, Wu; Zhang, Yu; Xie, Xiaomian; Hao, Liwei; Wang, Zhiyuan; Li, Shuxiang

    2002-12-01

    Reconstructing 3D images from serial tissue section images is an important morphological research method, and registration of the serial images is a key step in 3D reconstruction. Firstly, an introduction to the segmentation-counting registration algorithm, which is based on the joint histogram, is presented. After thresholding of the two images to be registered, the criterion function is defined as a count within a specific region of the joint histogram, which greatly speeds up the alignment process. Then, the method is used to conduct the serial tissue image matching task, laying a solid foundation for 3D rendering. Finally, preliminary surface rendering results are presented.
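    The segmentation-counting criterion can be sketched directly: threshold both images and count the voxels falling in the "both foreground" cell of the joint histogram; alignment maximizes that count. The toy images, thresholds and search range below are invented for illustration.

```python
import numpy as np

def counting_criterion(fixed, moving, t_fixed, t_moving):
    """Segmentation-counting criterion: threshold both images and count
    pixels in the 'both foreground' cell of the joint histogram."""
    return np.count_nonzero((fixed > t_fixed) & (moving > t_moving))

# Toy 2-D sections: the criterion peaks when the shift is undone.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
shifted = np.roll(img, 5, axis=1)

scores = {dx: counting_criterion(img, np.roll(shifted, -dx, axis=1), 0.5, 0.5)
          for dx in range(0, 11)}
best = max(scores, key=scores.get)   # recovered horizontal shift
```

    A full registration would also search rotations, but the criterion evaluation stays this cheap counting step.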

  12. Analysis of the chicken retina with an adaptive optics multiphoton microscope

    PubMed Central

    Bueno, Juan M.; Giakoumaki, Anastasia; Gualda, Emilio J.; Schaeffel, Frank; Artal, Pablo

    2011-01-01

    The structure and organization of the chicken retina have been investigated with an adaptive optics multiphoton imaging microscope in a backward configuration. Non-stained flat-mounted retinal tissues were imaged at different depths, from the retinal nerve fiber layer to the outer segment, by detecting the intrinsic nonlinear fluorescent signal. From the stacks of images corresponding to the different retinal layers, volume renderings of the entire retina were reconstructed. The densities of the photoreceptor and ganglion cell layers were directly estimated from the images as a function of retinal eccentricity. The maximum anatomical resolving power at different retinal eccentricities was also calculated. This technique could be used for a better characterization of retinal alterations during myopia development, and may be useful for visualization of retinal pathologies and intoxication during pharmacological studies. PMID:21698025

  13. Spectral domain optical coherence tomography of multi-MHz A-scan rates at 1310 nm range and real-time 4D-display up to 41 volumes/second

    PubMed Central

    Choi, Dong-hak; Hiro-Oka, Hideaki; Shimizu, Kimiya; Ohbayashi, Kohji

    2012-01-01

    An ultrafast frequency domain optical coherence tomography system was developed at A-scan rates between 2.5 and 10 MHz, a B-scan rate of 4 or 8 kHz, and volume-rates between 12 and 41 volumes/second. In the case of the worst duty ratio of 10%, the averaged A-scan rate was 1 MHz. Two optical demultiplexers at a center wavelength of 1310 nm were used for linear-k spectral dispersion and simultaneous differential signal detection at 320 wavelengths. The depth-range, sensitivity, sensitivity roll-off by 6 dB, and axial resolution were 4 mm, 97 dB, 6 mm, and 23 μm, respectively. Using FPGAs for FFT and a GPU for volume rendering, a real-time 4D display was demonstrated at a rate up to 41 volumes/second for an image size of 256 (axial) × 128 × 128 (lateral) voxels. PMID:23243560
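    The core spectral-domain step, recovering a depth profile (A-scan) as the Fourier transform of the wavenumber-sampled interferogram, can be sketched on a synthetic fringe (a toy single-reflector signal with 320 spectral channels, echoing the demultiplexer count above, not the system's real data):

```python
import numpy as np

# Toy spectral interferogram: a single reflector produces a cosine fringe
# across the k-sampled spectrum (here 320 channels).
n_k = 320
k = np.arange(n_k)
depth_bin = 40
fringe = np.cos(2 * np.pi * depth_bin * k / n_k)

# The A-scan (depth profile) is the magnitude of the FFT over wavenumber.
a_scan = np.abs(np.fft.rfft(fringe))
peak = int(np.argmax(a_scan))   # reflector recovered at its depth bin
```

    In the reported system this transform runs on FPGAs, with a GPU handling the subsequent volume rendering.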

  14. Real time ray tracing based on shader

    NASA Astrophysics Data System (ADS)

    Gui, JiangHeng; Li, Min

    2017-07-01

    Ray tracing is a rendering algorithm that generates an image by tracing light rays onto an image plane; it can simulate complicated optical phenomena such as refraction, depth of field and motion blur. Compared with rasterization, ray tracing can achieve a more realistic rendering result, but at much greater computational cost: even simple scenes can take a long time to render. With the improvement in GPU performance and the advent of the programmable rendering pipeline, complicated algorithms can now be implemented directly in shaders. This paper therefore proposes a new method that implements ray tracing directly in the fragment shader, mainly including surface intersection, importance sampling and progressive rendering. With the help of the GPU's powerful throughput, it achieves real-time rendering of simple scenes.
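    The surface-intersection step at the heart of such a ray tracer can be sketched on the CPU (here in Python for illustration rather than in a fragment shader; the helper name is invented). The standard ray-sphere test solves a quadratic for the hit distance:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance of a ray with a sphere, or None.
    Solves |o + t*d - c|^2 = r^2 for t (direction d assumed unit length)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0      # nearer root
    return t if t > 0 else None

# A ray along +z from (0, 0, -4) hits a unit sphere at the origin at t = 3.
t = ray_sphere((0, 0, -4), (0, 0, 1), (0, 0, 0), 1.0)
```

    On the GPU the same arithmetic runs per pixel, with importance sampling and progressive accumulation layered on top.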

  15. Light-sheet enhanced resolution of light field microscopy for rapid imaging of large volumes

    NASA Astrophysics Data System (ADS)

    Madrid Wolff, Jorge; Castro, Diego; Arbeláez, Pablo; Forero-Shelton, Manu

    2018-02-01

    Whole-brain imaging is challenging because it demands microscopes with high temporal and spatial resolution, which are often at odds, especially in the context of large fields of view. We have designed and built a light-sheet microscope with digital micromirror illumination and light-field detection. On the one hand, light sheets provide high resolution optical sectioning on live samples without compromising their viability. On the other hand, light field imaging makes it possible to reconstruct full volumes of relatively large fields of view from a single camera exposure; however, its enhanced temporal resolution comes at the expense of spatial resolution, limiting its applicability. We present an approach to increase the resolution of light field images using DMD-based light sheet illumination. To that end, we develop a method to produce synthetic resolution targets for light field microscopy and a procedure to correct the depth at which planes are refocused with rendering software. We measured the axial resolution as a function of depth and show a three-fold potential improvement with structured illumination, albeit by sacrificing some temporal resolution, also three-fold. This results in an imaging system that may be adjusted to specific needs without having to reassemble and realign it. This approach could be used to image relatively large samples at high rates.

  16. High Performance GPU-Based Fourier Volume Rendering.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr

    2015-01-01

    Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms, which are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation, generating attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive platform that can deliver enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high-performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. The proposed implementation achieves a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together, by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
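    The Fourier projection-slice theorem underlying FVR is easy to verify numerically: the k_z = 0 slice of the volume's 3D spectrum inverse-transforms to the projection (line integrals) along z. A NumPy sketch on a toy volume, not the paper's CUDA pipeline:

```python
import numpy as np

# A binary cube inside a 32^3 volume.
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0

# Projection-slice theorem: the k_z = 0 slice of the 3D spectrum is the
# 2D spectrum of the projection along z.
spectrum = np.fft.fftn(vol)
central_slice = spectrum[:, :, 0]
fvr_projection = np.fft.ifft2(central_slice).real

# Brute-force line integrals for comparison.
direct_projection = vol.sum(axis=2)
```

    For arbitrary view angles the slice must be resampled (interpolated) in the frequency domain, which is where most of the implementation effort in a practical FVR system goes.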

  17. Combined in-depth, 3D, en face imaging of the optic disc, optic disc pits and optic disc pit maculopathy using swept-source megahertz OCT at 1050 nm.

    PubMed

    Maertz, Josef; Kolb, Jan Philip; Klein, Thomas; Mohler, Kathrin J; Eibl, Matthias; Wieser, Wolfgang; Huber, Robert; Priglinger, Siegfried; Wolf, Armin

    2018-02-01

    To demonstrate papillary imaging of eyes with optic disc pits (ODP) or optic disc pit associated maculopathy (ODP-M) with ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s, and to generate 3D-renderings of the papillary area with 3D volume-reconstructions of the ODP and highly resolved en face images from a single densely-sampled megahertz-OCT (MHz-OCT) dataset for investigation of ODP characteristics. A 1.68 MHz-prototype SS-MHz-OCT system at 1050 nm based on a Fourier-domain mode-locked laser was employed to acquire high-definition 3D datasets with a dense sampling of 1600 × 1600 A-scans over a 45° field of view. Six eyes with ODPs, and two further eyes with glaucomatous alteration or without ocular pathology, are presented. 3D-rendering of the deep papillary structures, virtual 3D-reconstructions of the ODPs and depth-resolved isotropic en face images were generated using semiautomatic segmentation. 3D-rendering and en face imaging of the optic disc, ODPs and ODP-associated pathologies showed a broad spectrum of ODP characteristics. Between individuals, the shape of the ODP and the appending pathologies varied considerably. MHz-OCT en face imaging generates distinct top-view images of ODPs and ODP-M. MHz-OCT generates high resolution images of retinal pathologies associated with ODP-M and allows visualizing ODPs with depths of up to 2.7 mm. Different patterns of ODPs can be visualized in patients for the first time using 3D-reconstructions and co-registered high-definition en face images extracted from a single densely sampled 1050 nm megahertz-OCT (MHz-OCT) dataset. As the immediate vicinity of the SAS and the site of intrapapillary proliferation is located at the bottom of the ODP, it is crucial to image the complete structure and the whole depth of ODPs. Especially in very deep pits, where non-swept-source OCT fails to reach the bottom, conventional swept-source devices and the MHz-OCT alike are feasible methods to examine deep details of optic disc pathologies, while the MHz-OCT has the advantage of an essentially swifter imaging process.

  18. Virtual dissection of Thoropa miliaris tadpole using phase-contrast synchrotron microtomography

    NASA Astrophysics Data System (ADS)

    Fidalgo, G.; Colaço, M. V.; Nogueira, L. P.; Braz, D.; Silva, H. R.; Colaço, G.; Barroso, R. C.

    2018-05-01

    In this work, in-line phase-contrast synchrotron microtomography was used to study the external and internal morphology of Thoropa miliaris tadpoles. Whole specimens of T. miliaris at larval development stages 28, 37 and 42, collected in the municipality of Mangaratiba (Rio de Janeiro, Brazil), were used for the study. The samples were scanned on the microtomography beamline (IMX) at the Brazilian Synchrotron Light Laboratory (LNLS). The phase-contrast technique allowed us to obtain high quality images, which made possible the segmentation of structures in the rendered volume using the Avizo graphic image editing software. The combination of high quality images and the segmentation process provides adequate visualization of different organs and of soft (liver, notochord, brain, crystalline lens, cartilages) and hard (elements of the bony skeleton) tissues.

  19. Correlation between differential renal function estimation using CT-based functional renal parenchymal volume and (99m)Tc - DTPA renal scan.

    PubMed

    Sarma, Debanga; Barua, Sasanka K; Rajeev, T P; Baruah, Saumar J

    2012-10-01

    Nuclear renal scan is currently the gold-standard imaging study for determining differential renal function. We propose helical CT as a single modality for both the anatomical and functional evaluation of kidneys with impaired function. In the present study, renal parenchymal volume was measured and percent total renal volume was used as a surrogate marker for differential renal function. The objective of this study was to correlate differential renal function estimated from CT-based renal parenchymal volume measurement with that estimated by (99m)Tc-DTPA renal scan. Twenty-one patients with unilateral obstructive uropathy were enrolled in this prospective comparative study. They underwent (99m)Tc-DTPA renal scan and 64-slice helical CT, in which renal volume was estimated by reconstructing arterial-phase images followed by volume rendering, and percent renal volume was calculated. Percent renal volume was correlated with percent renal function, as determined by nuclear renal scan, using the Pearson coefficient. A strong correlation was observed between percent renal volume and percent renal function in obstructed units (r = 0.828, P < 0.001) as well as in nonobstructed units (r = 0.827, P < 0.001). There is a strong correlation between percent renal volume determined by CT and percent renal function determined by (99m)Tc-DTPA renal scan, both in obstructed and in normal units. CT-based percent renal volume can therefore be used as a single radiological test for both the functional and anatomical assessment of impaired renal units.
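    The correlation step described above can be sketched in a few lines. The paired values below are hypothetical and serve only to illustrate computing the Pearson coefficient between CT-derived percent renal volume and scan-derived percent renal function:

```python
def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired measurements for six renal units (percent renal
# volume from CT, percent renal function from the DTPA scan).
percent_volume   = [48.0, 52.5, 35.0, 60.1, 44.2, 55.3]
percent_function = [46.5, 53.0, 33.8, 61.0, 45.1, 54.0]
r = pearson_r(percent_volume, percent_function)
```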

  20. Ophthalmologic diagnostic tool using MR images for biomechanically-based muscle volume deformation

    NASA Astrophysics Data System (ADS)

    Buchberger, Michael; Kaltofen, Thomas

    2003-05-01

    We would like to give a work-in-progress report on our ophthalmologic diagnostic software system, which performs biomechanically based muscle volume deformations using MR images. For reconstructing a three-dimensional representation of an extraocular eye muscle, a sufficient number of high-resolution MR images is used, each representing a slice of the muscle. In addition, threshold values are given, which restrict the amount of data used from the MR images. The Marching Cubes algorithm is applied to the image data, resulting in a polygonal 3D representation of the muscle that can be rendered efficiently. A transformation to a dynamic, deformable model is applied by calculating the center of gravity of each muscle slice, approximating the muscle path and subsequently adding Hermite splines through the centers of gravity of all slices. A radius function is then defined for each slice, completing the transformation of the static 3D polygon model. Finally, this paper describes future extensions to our system. One of these extensions is support for additional calculations and measurements within the reconstructed 3D muscle representation. Globe translation, localization of muscle pulleys by analyzing the 3D reconstruction in two different gaze positions, and other diagnostic measurements will be available.
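    The slice-centroid spline step can be sketched as follows. This is a minimal sketch with one-sided and Catmull-Rom tangents and hypothetical slice data; the authors' actual spline construction may differ:

```python
def centroid(points):
    # Center of gravity of one segmented muscle cross-section.
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def hermite(p0, p1, m0, m1, t):
    # Cubic Hermite interpolation between p0 and p1 with tangents m0, m1.
    h00 = 2 * t ** 3 - 3 * t ** 2 + 1
    h10 = t ** 3 - 2 * t ** 2 + t
    h01 = -2 * t ** 3 + 3 * t ** 2
    h11 = t ** 3 - t ** 2
    return tuple(h00 * a + h01 * b + h10 * c + h11 * d
                 for a, b, c, d in zip(p0, p1, m0, m1))

# Hypothetical contour points of three consecutive MR slices.
slices = [[(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 2.0, 0.0)],
          [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (1.0, 2.0, 1.0)],
          [(0.0, 1.0, 2.0), (2.0, 1.0, 2.0), (1.0, 3.0, 2.0)]]
c = [centroid(s) for s in slices]
m0 = tuple(b - a for a, b in zip(c[0], c[1]))          # one-sided tangent
m1 = tuple(0.5 * (b - a) for a, b in zip(c[0], c[2]))  # Catmull-Rom tangent
mid = hermite(c[0], c[1], m0, m1, 0.5)  # point on the muscle-path spline
```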

  1. Cardiac cycle-dependent left atrial dynamics: implications for catheter ablation of atrial fibrillation.

    PubMed

    Patel, Amit R; Fatemi, Omid; Norton, Patrick T; West, J Jason; Helms, Adam S; Kramer, Christopher M; Ferguson, John D

    2008-06-01

    Left atrial (LA) volume determines prognosis and response to therapy for atrial fibrillation. Integration of electroanatomic maps with three-dimensional images rendered from computed tomography and magnetic resonance imaging (MRI) is used to facilitate atrial fibrillation ablation. The purpose of this study was to measure LA volume changes and regional motion during the cardiac cycle that might affect the accuracy of image integration and to determine their relationship to standard LA volume measurements. MRI was performed in 30 patients with paroxysmal atrial fibrillation. LA time-volume curves were generated and used to divide LA ejection fraction into pumping ejection fraction and conduit ejection fraction and to determine maximum LA volume (LA(max)) and preatrial contraction volume. LA volume was measured using an MRI angiogram and traditional geometric models from echocardiography (area-length model and ellipsoid model). In-plane displacement of the pulmonary veins, anterior left atrium, mitral annulus, and LA appendage was measured. LA(max) was 107 +/- 36 mL and occurred at 42% +/- 5% of the R-R interval. Preatrial contraction volume was 86 +/- 34 mL and occurred at 81% +/- 4% of the R-R interval. LA ejection fraction was 45% +/- 10%, and pumping ejection fraction was 31% +/- 10%. LA volume measurements made from MRI angiogram, area-length model, and ellipsoid model underestimated LA(max) by 21 +/- 25 mL, 16 +/- 26 mL, and 35 +/- 22 mL, respectively. Anterior LA, mitral annulus, and LA appendage were significantly displaced during the cardiac cycle (8.8 +/- 2.0 mm, 13.2 +/- 3.8 mm, and 10.2 +/- 3.4 mm, respectively); the pulmonary veins were not displaced. LA volume changes significantly during the cardiac cycle, and substantial regional variation in LA motion exists. Standard measurements of LA volume significantly underestimate LA(max) compared to the gold standard measure of three-dimensional volumetrics.
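    Under one common set of definitions (an assumption here, not necessarily the authors' exact formulas), the phasic ejection fractions follow from three points on the LA time-volume curve. LA(max) and the pre-atrial-contraction volume below are the study's mean values; the minimum volume is a hypothetical value chosen for illustration:

```python
def la_ejection_fractions(v_max, v_pre_a, v_min):
    # Assumed definitions: total EF referenced to LA(max), pumping (active)
    # EF referenced to the pre-atrial-contraction volume.
    total_ef = (v_max - v_min) / v_max
    pumping_ef = (v_pre_a - v_min) / v_pre_a
    conduit_ef = (v_max - v_pre_a) / v_max
    return total_ef, pumping_ef, conduit_ef

# v_max and v_pre_a from the study (mL); v_min is hypothetical.
total_ef, pumping_ef, conduit_ef = la_ejection_fractions(107.0, 86.0, 59.0)
```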

  2. Shwirl: Meaningful coloring of spectral cube data with volume rendering

    NASA Astrophysics Data System (ADS)

    Vohl, Dany

    2017-04-01

    Shwirl visualizes spectral data cubes with meaningful coloring methods. The program was developed to investigate transfer functions, which combine volumetric elements (voxels) to set the color, and graphics shaders, functions used to compute properties of the final image such as color, depth, and transparency, as enablers for scientific visualization of astronomical data. The program uses Astropy (ascl:1304.002) to handle FITS files and World Coordinate System information, Qt (and PyQt) for the user interface, and VisPy, an object-oriented Python visualization library built on OpenGL.
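    A transfer function of the kind Shwirl explores can be illustrated as a piecewise-linear mapping from normalized voxel intensity to RGBA. This is a generic sketch with hypothetical color stops, not Shwirl's actual implementation:

```python
def transfer_function(value, vmin, vmax, colormap):
    # Piecewise-linear RGBA transfer function over normalized intensity.
    t = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))
    i = min(int(t * (len(colormap) - 1)), len(colormap) - 2)
    f = t * (len(colormap) - 1) - i
    return tuple(a + f * (b - a) for a, b in zip(colormap[i], colormap[i + 1]))

# Two-stop map: transparent black -> opaque white (hypothetical stops).
cmap = [(0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 1.0, 1.0)]
rgba = transfer_function(0.5, 0.0, 1.0, cmap)
```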

  3. Accuracy evaluation of an X-ray microtomography system.

    PubMed

    Fernandes, Jaquiel S; Appoloni, Carlos R; Fernandes, Celso P

    2016-06-01

    Microstructural parameter evaluation of reservoir rocks is of great importance to petroleum production companies. In this context, X-ray computed microtomography (μ-CT) has proven to be a quite useful method for the assessment of rocks, as it provides important microstructural parameters such as porosity, permeability, pore size distribution and the porous phase of the sample. X-ray computed microtomography is a non-destructive technique that enables the reuse of samples already measured and also yields 2-D cross-sectional images of the sample as well as volume renderings. The technique offers the additional advantages of requiring no sample preparation and of a short measurement time, approximately one to three hours depending on the spatial resolution used. Although the technique is extensively used, verifying the accuracy of its measurements is difficult because existing calibrated samples (phantoms) have large volumes and are assessed in medical CT scanners with millimeter spatial resolution. Accordingly, this study aims to determine the accuracy of an X-ray computed microtomography system using a Skyscan 1172 X-ray microtomograph. For this investigation, a set of nylon threads of known diameter inserted into a glass tube was used. The results for pore size and phase distribution by X-ray microtomography were very close to the geometrically calculated values: the geometrically calculated porosity and the porosity determined by μ-CT were 33.4±3.4% and 31.0±0.3%, respectively. Only small variability was observed in the results across all 401 sections of the analyzed image; minimum and maximum porosity values between the cross sections were 30.9% and 31.1%, respectively. A 3-D image representing the actual structure of the sample was also rendered from the 2-D images. Copyright © 2016 Elsevier Ltd. All rights reserved.
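    The geometric reference value follows directly from the phantom geometry; the radii below are hypothetical, since the actual thread and tube dimensions are not given in the abstract:

```python
import math

def thread_phantom_porosity(r_tube, thread_radii):
    # "Porosity" of the phantom: fraction of the tube cross-section not
    # occupied by the nylon threads.
    tube_area = math.pi * r_tube ** 2
    solid_area = sum(math.pi * r ** 2 for r in thread_radii)
    return 1.0 - solid_area / tube_area

# Hypothetical geometry: eight threads of radius 0.25 in a unit-radius tube.
porosity = thread_phantom_porosity(1.0, [0.25] * 8)
```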

  4. A novel image processing technique for 3D volumetric analysis of severely resorbed alveolar sockets with CBCT.

    PubMed

    Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario

    2017-06-01

    The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT datasets of 9 severely resorbed extraction sockets were analyzed by means of two image-processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. The methods were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT to test their accuracy. Statistically significant differences in alveolar socket volume were found between the different methods of volumetric analysis (P < 0.0001). Automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method using Mimics and by the automated method using ImageJ. The proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer agreement and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.

  5. Efficient Stochastic Rendering of Static and Animated Volumes Using Visibility Sweeps.

    PubMed

    von Radziewsky, Philipp; Kroes, Thomas; Eisemann, Martin; Eisemann, Elmar

    2017-09-01

    Stochastically solving the rendering integral (particularly visibility) is the de facto standard for physically based light transport, but it is computationally expensive, especially when displaying heterogeneous volumetric data. In this work, we present efficient techniques to speed up the rendering process via a novel visibility-estimation method in concert with unbiased importance sampling (involving environmental lighting and visibility inside the volume), filtering, and update techniques for both static and animated scenes. Our major contributions include a progressive estimate of partial occlusions based on a fast sweeping-plane algorithm. These occlusions are stored in an octahedral representation, which can be conveniently transformed into a quadtree-based hierarchy suited for joint importance sampling. Further, we propose sweep-space filtering, which suppresses the occurrence of fireflies, and we investigate different update schemes for animated scenes. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, animated volumes and changing environmental lighting.
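    The octahedral representation mentioned above is, in its common formulation, a mapping from unit directions onto a square. A minimal sketch, assuming the standard octahedral parameterization rather than the paper's exact storage layout:

```python
def octahedral_encode(v):
    # Standard octahedral mapping of a unit direction onto [0,1]^2.
    x, y, z = v
    s = abs(x) + abs(y) + abs(z)   # project onto the octahedron |x|+|y|+|z|=1
    x, y, z = x / s, y / s, z / s
    if z < 0:
        # Fold the lower hemisphere over the diagonals.
        fx = (1 - abs(y)) * (1.0 if x >= 0 else -1.0)
        fy = (1 - abs(x)) * (1.0 if y >= 0 else -1.0)
        x, y = fx, fy
    return (x * 0.5 + 0.5, y * 0.5 + 0.5)

up = octahedral_encode((0.0, 0.0, 1.0))   # zenith maps to the square's center
```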

  6. Computational Video for Collaborative Applications

    DTIC Science & Technology

    2003-03-01

    “Plenoptic Modeling: An Image-Based Rendering System.” SIGGRAPH 95, 39-46. [18] McMillan, L. An Image-Based Approach to Three-Dimensional Computer... Plenoptic modeling and rendering from image sequences taken by hand-held camera. Proc. DAGM 99, pages 94–101. [8] Y. Horry, K. Anjyo, and K. Arai

  7. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High-quality volume visualization through ray casting on graphics processing units (GPUs) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree; these nodes are the highest resolution of the volume. Coarser resolutions are represented by inner nodes of the hierarchy, which are generated by downsampling eight neighboring nodes on a finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a second-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory. This texture atlas contains the complete working set of bricks of the current multi-resolution representation of the volume, which enables the volume ray casting algorithm to access the whole working set through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree subdivision at its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set in only a single rendering pass with no loss of compositing precision. In contrast, most state-of-the-art volume rendering systems handle the bricked data as individual 3D textures, which are rendered one at a time while the results are composited into a lower-precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques like empty-space skipping, adaptive sampling and preintegrated transfer functions in a very straightforward manner with virtually no extra cost. Our interactive volume ray tracing implementation allows high-quality visualizations of massive volume data sets of tens of gigabytes in size on standard desktop workstations.
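    The index-texture lookup described above can be sketched as follows. Data layout and names are illustrative assumptions, with a Python dict standing in for the 3D index texture:

```python
def atlas_coords(p, index_grid, grid_dim):
    # Map a normalized volume position p in [0,1)^3 to atlas coordinates.
    # index_grid[cell] -> (atlas_origin, level): the brick covering that
    # finest-level cell starts at atlas_origin (in brick units) and sits at
    # octree level `level`, spanning 2**level index cells per axis.
    cell = tuple(min(int(c * grid_dim), grid_dim - 1) for c in p)
    origin, level = index_grid[cell]
    span = 2 ** level
    local = tuple((c * grid_dim / span) % 1.0 for c in p)  # position in brick
    return tuple(o + l for o, l in zip(origin, local))

# Toy index "texture": one level-1 brick at atlas origin (0,0,0) covers the
# entire 2x2x2 finest-level index grid.
grid = {(i, j, k): ((0.0, 0.0, 0.0), 1)
        for i in range(2) for j in range(2) for k in range(2)}
sample = atlas_coords((0.25, 0.0, 0.75), grid, 2)
```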

  8. Bio-inspired color image enhancement

    NASA Astrophysics Data System (ADS)

    Meylan, Laurence; Susstrunk, Sabine

    2004-06-01

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is because the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation, taking inspiration from a model of color vision called Retinex. This model determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
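    A single-scale Retinex filter on the luminance channel, in the spirit of the method described, can be sketched as follows (a simplified sketch, with a box blur standing in for the paper's actual surround filter):

```python
import math

def box_blur(img, radius):
    # Simple spatial surround: average over a (2r+1)^2 neighborhood.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def single_scale_retinex(lum, radius=1):
    # Output = log(luminance) - log(local surround): bright surrounds push
    # values down, dim surrounds push them up (local adaptation).
    mask = box_blur(lum, radius)
    return [[math.log(p + 1e-6) - math.log(m + 1e-6)
             for p, m in zip(rp, rm)]
            for rp, rm in zip(lum, mask)]

flat = single_scale_retinex([[0.5] * 4 for _ in range(4)])
```

    A uniform image produces a zero response everywhere, since every pixel equals its surround; contrast only appears where the luminance deviates locally.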

  9. Synthesized view comparison method for no-reference 3D image quality assessment

    NASA Astrophysics Data System (ADS)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric uses virtual middle views warped from the left and right views by depth-image-based rendering (DIBR) and compares the difference between the virtual views rendered from different cameras using the Structural SIMilarity (SSIM) index, a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has prediction performance competitive with some classic full-reference image quality assessment metrics.
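    The SSIM comparison at the heart of the metric can be sketched with whole-image statistics. The paper uses the usual windowed SSIM, so this global-statistics version and the sample data are a simplification for illustration:

```python
def ssim_global(x, y, L=255.0):
    # Whole-image (global-statistics) SSIM between two intensity lists.
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizers
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx * mx + my * my + C1) * (vx + vy + C2))

# Hypothetical virtual middle views warped from the left and right cameras.
left_warp  = [12.0, 40.0, 200.0, 90.0]
right_warp = [14.0, 38.0, 190.0, 95.0]
score = ssim_global(left_warp, right_warp)
```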

  10. IceT users' guide and reference.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth D.

    2011-01-01

    The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays. The overall resolution of the display may be several times larger than any viewport that may be rendered by a single machine. This document is an overview of the user interface to IceT.
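    Sort-last compositing of the kind IceT accelerates can be illustrated with a minimal depth-compositing merge. This sketches the principle only and is not IceT's API; IceT's actual compositing algorithms are far more sophisticated:

```python
def z_composite(buffers):
    # Merge per-node (color, depth) buffers; the nearest fragment
    # (smallest depth) wins at each pixel.
    out = list(buffers[0])
    for buf in buffers[1:]:
        for i, frag in enumerate(buf):
            if frag[1] < out[i][1]:
                out[i] = frag
    return out

# Two nodes each rendered their share of the data into a full-size buffer.
node_a = [((1, 0, 0), 0.5), ((0, 0, 0), 1.0)]
node_b = [((0, 1, 0), 0.8), ((0, 0, 1), 0.2)]
merged = z_composite([node_a, node_b])
```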

  11. Automatic extraction of via in the CT image of PCB

    NASA Astrophysics Data System (ADS)

    Liu, Xifeng; Hu, Yuwei

    2018-04-01

    In modern industry, nondestructive testing of printed circuit boards (PCBs) can effectively prevent system failures and is becoming more and more important. To detect vias in a PCB from CT images automatically, accurately and reliably, a novel via-extraction algorithm is designed that combines a weighted slice stack with the morphological characteristics of vias. The slice data along the vertical direction of the PCB are superimposed to enhance the via targets. The Otsu algorithm is used to segment the slice image; Otsu thresholding of gray-level images is efficient for separating an image into two classes when two fairly distinct classes exist in the image. The randomized Hough transform is then used to locate the via regions in the segmented binary image, and a 3D reconstruction of the vias based on the sequence of slice images is obtained by volume rendering. The accuracy of via positioning and detection from CT images of a PCB was demonstrated with the proposed algorithm, and the method was found to be accurate and stable for detecting vias in three dimensions.
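    The Otsu step can be sketched in pure Python on a synthetic bimodal image; the pipeline's other stages (the weighted stack and the randomized Hough transform) are omitted here:

```python
def otsu_threshold(gray, levels=256):
    # Otsu's method: pick the threshold maximizing between-class variance.
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]           # class 0: pixels <= t
        sum0 += t * hist[t]
        if w0 == 0:
            continue
        w1 = total - w0         # class 1: pixels > t
        if w1 == 0:
            break
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal slice: dark substrate pixels and bright via pixels.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```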

  12. Efficient high-quality volume rendering of SPH data.

    PubMed

    Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger

    2010-01-01

    High-quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid effectively reduces the number of primitives to be processed at run time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
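    A perspective-grid discretization can be sketched as a view-space cell lookup in which the lateral coordinates are divided by depth (constant on-screen footprint) and depth slices are spaced logarithmically, so world-space cell size grows with distance. Names and the slicing scheme are illustrative assumptions, not the paper's exact construction:

```python
import math

def perspective_cell(p_view, near, far, nx, ny, nz, fov_scale=1.0):
    # Lateral cell index: x/z and y/z give a constant on-screen footprint,
    # so world-space cell size grows linearly with depth.
    x, y, z = p_view
    ix = int((x / (z * fov_scale) * 0.5 + 0.5) * nx)
    iy = int((y / (z * fov_scale) * 0.5 + 0.5) * ny)
    # Depth slices spaced logarithmically between near and far planes.
    iz = min(int(math.log(z / near) / math.log(far / near) * nz), nz - 1)
    return ix, iy, iz

cell = perspective_cell((0.0, 0.0, 10.0), 1.0, 100.0, 8, 8, 16)
```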

  13. Thalamotemporal alteration and postoperative seizures in temporal lobe epilepsy

    PubMed Central

    Richardson, Mark P.; Schoene‐Bake, Jan‐Christoph; O'Muircheartaigh, Jonathan; Elkommos, Samia; Kreilkamp, Barbara; Goh, Yee Yen; Marson, Anthony G.; Elger, Christian; Weber, Bernd

    2015-01-01

    Objective There are competing explanations for persistent postoperative seizures after temporal lobe surgery. One is that 1 or more particular subtypes of mesial temporal lobe epilepsy (mTLE) exist that are particularly resistant to surgery. We sought to identify a common brain structural and connectivity alteration in patients with persistent postoperative seizures using preoperative quantitative magnetic resonance imaging and diffusion tensor imaging (DTI). Methods We performed a series of studies in 87 patients with mTLE (47 subsequently rendered seizure free, 40 who continued to experience postoperative seizures) and 80 healthy controls. We investigated the relationship between imaging variables and postoperative seizure outcome. All patients had unilateral temporal lobe seizure onset, had ipsilateral hippocampal sclerosis as the only brain lesion, and underwent amygdalohippocampectomy. Results Quantitative imaging factors found not to be significantly associated with persistent seizures were volumes of ipsilateral and contralateral mesial temporal lobe structures, generalized brain atrophy, and extent of resection. There were nonsignificant trends for larger amygdala and entorhinal resections to be associated with improved outcome. However, patients with persistent seizures had significant atrophy of bilateral dorsomedial and pulvinar thalamic regions, and significant alterations of DTI‐derived thalamotemporal probabilistic paths bilaterally relative to those patients rendered seizure free and controls, even when corrected for extent of mesial temporal lobe resection. Interpretation Patients with bihemispheric alterations of thalamotemporal structural networks may represent a subtype of mTLE that is resistant to temporal lobe surgery. Increasingly sensitive multimodal imaging techniques should endeavor to transform these group‐based findings to individualize prediction of patient outcomes. Ann Neurol 2015;77:760–774 PMID:25627477

  14. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming the brightness dimming of the 3D display mode. The 3D viewing surround provides varying conditions for image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image-mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
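    The combination of global and local adaptation can be illustrated schematically. This is a hypothetical blend, much simpler than the paper's complex-adaptation model:

```python
def tone_compress(lum, surround, alpha=0.5, gamma=2.2):
    # Hypothetical blend of global gamma compression with a local,
    # surround-dependent adaptation term (Naka-Rushton-like).
    out = []
    for p, m in zip(lum, surround):
        g = p ** (1.0 / gamma)          # global adaptation
        l = p / (p + m + 1e-6)          # local adaptation to the surround
        out.append(alpha * g + (1 - alpha) * l)
    return out

mapped = tone_compress([0.04, 0.25, 0.81], [0.2, 0.2, 0.2])
```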

  15. Voxel-based lesion mapping of meningioma: a comprehensive lesion location mapping of 260 lesions.

    PubMed

    Hirayama, Ryuichi; Kinoshita, Manabu; Arita, Hideyuki; Kagawa, Naoki; Kishima, Haruhiko; Hashimoto, Naoya; Fujimoto, Yasunori; Yoshimine, Toshiki

    2018-06-01

    OBJECTIVE In the present study the authors aimed to determine preferred locations of meningiomas by avoiding descriptive analysis and instead using voxel-based lesion mapping and 3D image-rendering techniques. METHODS Magnetic resonance images obtained in 248 treatment-naïve meningioma patients with 260 lesions were retrospectively and consecutively collected. All images were registered to a 1-mm isotropic, high-resolution, T1-weighted brain atlas provided by the Montreal Neurological Institute (the MNI152), and a lesion frequency map was created, followed by 3D volume rendering to visualize the preferred locations of meningiomas in 3D. RESULTS The 3D lesion frequency map clearly showed that skull base structures such as parasellar, sphenoid wing, and petroclival regions were commonly affected by the tumor. The middle one-third of the superior sagittal sinus was most commonly affected in parasagittal tumors. Substantial lesion accumulation was observed around the leptomeninges covering the central sulcus and the sylvian fissure, with very few lesions observed at the frontal, parietal, and occipital convexities. CONCLUSIONS Using an objective visualization method, meningiomas were shown to be located around the middle third of the superior sagittal sinus, the perisylvian convexity, and the skull base. These observations, which are in line with previous descriptive analyses, justify further use of voxel-based lesion mapping techniques to help understand the biological nature of this disease.
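    The lesion frequency map described above amounts to a voxel-wise sum of co-registered binary masks. A minimal sketch, using 2D masks for brevity where the study accumulates 3D masks in MNI152 atlas space:

```python
def lesion_frequency_map(masks):
    # Voxel-wise sum of co-registered binary lesion masks.
    h, w = len(masks[0]), len(masks[0][0])
    freq = [[0] * w for _ in range(h)]
    for m in masks:
        for y in range(h):
            for x in range(w):
                freq[y][x] += m[y][x]
    return freq

# Two hypothetical co-registered lesion masks.
m1 = [[1, 0], [0, 1]]
m2 = [[1, 1], [0, 0]]
freq = lesion_frequency_map([m1, m2])
```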

  16. Fast automatic delineation of cardiac volume of interest in MSCT images

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Lessick, Jonathan; Lavi, Guy; Bulow, Thomas; Renisch, Steffen

    2004-05-01

    Computed tomography angiography (CTA) is an emerging modality for assessing cardiac anatomy. The delineation of the cardiac volume of interest (VOI) is a pre-processing step for subsequent visualization or image processing. It serves to suppress anatomic structures that are not in the primary focus of the cardiac application, such as the sternum, ribs, spinal column, descending aorta and pulmonary vasculature. These structures obliterate standard visualizations such as direct volume renderings or maximum intensity projections. In addition, the outcome and performance of post-processing steps such as ventricle suppression, coronary artery segmentation or the detection of the short and long axes of the heart can be improved. The structures that are part of the cardiac VOI (coronary arteries and veins, myocardium, ventricles and atria) differ tremendously in appearance. Moreover, there is no clear image feature associated with the contour (or, more precisely, cut-surface) distinguishing the cardiac VOI from surrounding tissue, which makes automatic delineation of the cardiac VOI a difficult task. In a first step, the presented approach locates the chest wall and descending aorta in all image slices, giving a rough estimate of the location of the heart. In a second step, a Fourier-based active contour approach delineates the border of the cardiac VOI slice by slice. The algorithm has been evaluated on 41 multi-slice CT data sets, including cases with coronary stents and venous and arterial bypasses. The typical processing time amounts to 5-10 s on a 1 GHz P3 PC.

  17. Freely-available, true-color volume rendering software and cryohistology data sets for virtual exploration of the temporal bone anatomy.

    PubMed

    Kahrs, Lüder Alexander; Labadie, Robert Frederick

    2013-01-01

    Cadaveric dissection of temporal bone anatomy is not always possible or feasible in certain educational environments. Volume rendering using CT and/or MRI helps in understanding spatial relationships, but it suffers from nonrealistic depiction, especially regarding the color of anatomical structures. Freely available, nonstained histological data sets, together with software able to render such data sets in realistic color, could overcome this limitation and be a very effective teaching tool. With the recent availability of specialized public-domain software, volume rendering of true-color histological data sets is now possible. We present both the feasibility of and step-by-step instructions for processing publicly available data sets (Visible Female Human and Visible Ear) into easily navigable 3-dimensional models using free software. Example renderings are shown to demonstrate the utility of these free methods in virtual exploration of the complex anatomy of the temporal bone. After exploring the data sets, the Visible Ear appears more natural than the Visible Human. We provide directions for easy-to-use, open-source software in conjunction with freely available histological data sets. This work facilitates self-education of the spatial relationships of anatomical structures inside the human temporal bone and allows exploration of surgical approaches prior to cadaveric testing and/or clinical implementation. Copyright © 2013 S. Karger AG, Basel.

  18. A probability tracking approach to segmentation of ultrasound prostate images using weak shape priors

    NASA Astrophysics Data System (ADS)

    Xu, Robert S.; Michailovich, Oleg V.; Solovey, Igor; Salama, Magdy M. A.

    2010-03-01

    Prostate-specific antigen density is an established parameter for indicating the likelihood of prostate cancer. To this end, the size and volume of the gland have become pivotal quantities used by clinicians during the standard cancer screening process. As an alternative to manual palpation, an increasing number of volume estimation methods are based on imagery data of the prostate. The necessity to process large volumes of such data requires automatic segmentation algorithms that can accurately and reliably identify the true prostate region. In particular, transrectal ultrasound (TRUS) imaging has become a standard means of assessing the prostate due to its safe nature and high benefit-to-cost ratio. Unfortunately, modern TRUS images are still plagued by many ultrasound imaging artifacts, such as speckle noise and shadowing, resulting in relatively low contrast and reduced SNR of the acquired images. Consequently, many modern segmentation methods incorporate prior knowledge about the prostate geometry to enhance traditional segmentation techniques. In this paper, a novel approach to the problem of TRUS segmentation, and particularly to the definition of the prostate shape prior, is presented. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for tracking both photometric and morphological features of the prostate. In particular, the tracking of morphological features defines a novel type of "weak" shape prior. The latter acts as a regularization force, which minimally biases the segmentation procedure while rendering the final estimate stable and robust. The value of the proposed methodology is demonstrated in a series of experiments.

  19. Archeological Testing Fort Hood: 1994-1995. Volume 2

    DTIC Science & Technology

    1996-10-01


  20. Real-time Graphics Processing Unit Based Fourier Domain Optical Coherence Tomography and Surgical Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Kang

    2011-12-01

    In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically at microsurgical intervention applications, was developed and studied. As part of this work, several ultra-high speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform was developed that uses the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering. Several GPU-based algorithms, such as non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and a multi-GPU implementation, were developed to improve the impulse response, SNR roll-off, and stability of the system. Full-range, complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve doubled imaging range and improved SNR. These technologies overcome the image reconstruction and visualization bottlenecks that are widespread in current ultra-high speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held, common-path optical coherence tomography (CP-OCT) distance-sensor-based microsurgical tool was developed and validated. Through real-time signal processing, edge detection, and feedback control, the tool was shown to be capable of tracking the target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor-integrated hand-held tool, which showed an incision error of less than +/-5 microns, compared to errors of >100 microns for free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT-guided micro-manipulation using a phantom. 
Multiple volume renderings of one 3D data set were performed with different view angles to allow accurate monitoring of the micro-manipulation and let the user clearly observe the tool-to-target spatial relation in real time. The system was also validated by imaging multiple biological samples, such as a human fingerprint, a human cadaver head, and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide surgeons with a real-time comprehensive spatial view of the microsurgical region and accurate depth perception.

  1. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

    The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. An easier and more extensive visualization and exploitation of medical images can be reached through the use of computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of and interaction with virtual organs delineated from CT-scan or MRI. This software provides 3D real-time surface rendering of anatomical structures, an accurate evaluation of volumes and distances, and the improvement of radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning, allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step of the future development of augmented reality and surgical simulation systems.

  2. Real-time photorealistic stereoscopic rendering of fire

    NASA Astrophysics Data System (ADS)

    Rose, Benjamin M.; McAllister, David F.

    2007-02-01

    We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real time presents a challenge because of the transparency and non-static, fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering, because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. These methods include the use of precomputed textures, dynamic generation of textures, and the rendering of models obtained by approximating solutions of fluid dynamics equations with ray-tracing algorithms. We have found that, in order to attain real-time frame rates, a method based on billboarding is effective. Slicing is used to simulate depth. 2D texture images are mapped onto polygons, and alpha blending is used to handle transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.
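The billboarding-with-alpha-blending approach described above can be sketched in a few lines. The following is an illustrative CPU-side sketch, not the paper's real-time OpenGL implementation; the array layout and function name are assumptions. Camera-facing RGBA slices are composited back to front with the standard "over" operator to approximate the transparency of fire:

```python
import numpy as np

def composite_billboards(slices):
    """Back-to-front alpha compositing ("over" operator) of RGBA
    billboard slices, the basic operation behind billboarded fire
    rendering. `slices` is a list of HxWx4 float arrays in [0, 1],
    ordered from farthest to nearest."""
    h, w, _ = slices[0].shape
    out = np.zeros((h, w, 3))
    for s in slices:                      # far to near
        rgb, a = s[..., :3], s[..., 3:4]
        out = rgb * a + out * (1.0 - a)   # "over" operator
    return out

# Two overlapping translucent "flame" slices
far = np.zeros((4, 4, 4));  far[..., 0] = 1.0;  far[..., 3] = 0.5   # red, 50% alpha
near = np.zeros((4, 4, 4)); near[..., 1] = 1.0; near[..., 3] = 0.5  # green, 50% alpha
img = composite_billboards([far, near])   # each pixel: 0.5*green over 0.5*red
```

In a GPU implementation the same blend is expressed per fragment by the fixed-function blending stage rather than in a loop.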

  3. Material Characterization and Geometric Segmentation of a Composite Structure Using Microfocus X-Ray Computed Tomography Image-Based Finite Element Modeling

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Roth, D. J.; Cotton, R.; Studor, George F.; Christiansen, Eric; Young, P. C.

    2011-01-01

    This study utilizes microfocus x-ray computed tomography (CT) slice sets to model and characterize the damage locations and sizes in thermal protection system materials that underwent impact testing. ScanIP/FE software is used to visualize and process the slice sets, followed by mesh generation on the segmented volumetric rendering. Then, the local stress fields around several of the damaged regions are calculated for realistic mission profiles that subject the sample to extreme temperature and other severe environmental conditions. The resulting stress fields are used to quantify damage severity and to assess whether damage that did not penetrate to the base material can still result in catastrophic failure of the structure. It is expected that this study will demonstrate that finite element modeling based on an accurate three-dimensional rendered model from a series of CT slices is an essential tool to quantify the internal macroscopic defects and damage of a complex system made of thermal protection material. Results showing details of the segmented images, the three-dimensional volume-rendered models, the generated finite element meshes, and the resulting thermomechanical stress state due to impact loading are presented and discussed. Further, this study is conducted to exhibit certain high-caliber capabilities that the nondestructive evaluation (NDE) group at NASA Glenn Research Center can offer to assist in assessing the structural durability of such highly specialized materials, so that improvements in their performance and capacity to handle harsh operating conditions can be made.

  4. Efficient visibility encoding for dynamic illumination in direct volume rendering.

    PubMed

    Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas

    2012-03-01

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
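The key property exploited above is that, once visibility and lighting are projected into the spherical harmonic basis, the shading integral over the sphere reduces to a dot product of coefficient vectors. A minimal sketch of that idea follows, using Monte Carlo SH projection with bands 0-1 only; it is not the paper's multiresolution-grid encoding, and all names are illustrative:

```python
import numpy as np

def sh_basis(d):
    """First 4 real spherical harmonics (bands 0-1) at unit directions d (Nx3)."""
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = 0.5 * np.sqrt(3.0 / np.pi)
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def sh_project(f, dirs):
    """Monte Carlo SH projection of a spherical function sampled at `dirs`."""
    weight = 4.0 * np.pi / len(dirs)          # uniform-sphere MC weight
    return weight * sh_basis(dirs).T @ f(dirs)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(200000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform on the sphere

visibility = lambda d: (d[:, 2] > 0).astype(float)   # upper hemisphere unoccluded
light = lambda d: np.ones(len(d))                    # constant environment map

v_coef = sh_project(visibility, dirs)
l_coef = sh_project(light, dirs)

# Integral of visibility * light over the sphere ~ dot product of SH coefficients
shading = v_coef @ l_coef                            # expect ~2*pi for this setup
```

With an open upper hemisphere and unit environment light the exact integral is 2π, which the coefficient dot product recovers up to the band-limit and Monte Carlo error; adding a phase function term works the same way.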

  5. New portable FELIX 3D display

    NASA Astrophysics Data System (ADS)

    Langhans, Knut; Bezecny, Daniel; Homann, Dennis; Bahr, Detlef; Vogt, Carsten; Blohm, Christian; Scharschmidt, Karl-Heinz

    1998-04-01

    An improved generation of our 'FELIX 3D Display' is presented. This system is compact, light, modular, and easy to transport. The created volumetric images consist of many voxels, which are generated in a half-sphere display volume. In that way a spatial object can be displayed occupying a physical space with height, width, and depth. The new FELIX generation uses a screen rotating at 20 revolutions per second. This target screen is mounted by an easy-to-change mechanism, making it possible to use appropriate screens for the specific purpose of the display. An acousto-optic deflection unit with an integrated small diode-pumped laser draws the images on the spinning screen. Images can consist of up to 10,000 voxels at a refresh rate of 20 Hz. Currently two different hardware systems are being investigated. The first is based on a standard PCMCIA digital/analog converter card as an interface and is controlled by a notebook. The developed software provides a graphical user interface enabling several animation features. The second, new prototype is designed to display images created by standard CAD applications. It includes the development of a new high-speed hardware interface suitable for state-of-the-art fast, high-resolution scanning devices, which require high data rates. A true 3D volume display as described will complement the broad range of 3D visualization tools, such as volume rendering packages, stereoscopic and virtual reality techniques, which have become widely available in recent years. Potential applications for the FELIX 3D display include imaging in the fields of air traffic control, medical imaging, computer-aided design, and science, as well as entertainment.

  6. MicroCT analysis of a retrieved root restored with a bonded fiber-reinforced composite dowel: a pilot study.

    PubMed

    Lorenzoni, Fabio Cesar; Bonfante, Estevam A; Bonfante, Gerson; Martins, Leandro M; Witek, Lukasz; Silva, Nelson R F A

    2013-08-01

    This evaluation aimed to (1) validate micro-computed tomography (microCT) findings using scanning electron microscopy (SEM) imaging, and (2) quantify the volume of voids and the bonded surface area resulting from fiber-reinforced composite (FRC) dowel cementation technique using microCT scanning technology/3D reconstructing software. A fiberglass dowel was cemented in a condemned maxillary lateral incisor prior to its extraction. A microCT scan was performed of the extracted tooth creating a large volume of data in DICOM format. This set of images was imported to image-processing software to inspect the internal architecture of structures. The outer surface and the spatial relationship of dentin, FRC dowel, cement layer, and voids were reconstructed. Three-dimensional spatial architecture of structures and volumetric analysis revealed that 9.89% of the resin cement was composed of voids and that the bonded area between root dentin and cement was 60.63% larger than that between cement and FRC dowel. SEM imaging demonstrated the presence of voids similarly observed using microCT technology (aim 1). MicroCT technology was able to nondestructively measure the volume of voids within the cement layer and the bonded surface area at the root/cement/FRC interfaces (aim 2). The interfaces at the root dentin/cement/dowel represent a timely and relevant topic where several efforts have been conducted in the past few years to understand their inherent features. MicroCT technology combined with 3D reconstruction allows for not only inspecting the internal arrangement rendered by fiberglass adhesively bonded to root dentin, but also estimating the volume of voids and contacted bond area between the dentin and cement layer. © 2013 by the American College of Prosthodontists.

  7. Space-time light field rendering.

    PubMed

    Wang, Huamin; Sun, Mingxuan; Yang, Ruigang

    2007-01-01

    In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.
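The two-stage structure of the algorithm, temporal interpolation to a common time followed by spatial interpolation across views, can be sketched as follows. This toy version uses plain linear blending in place of the paper's registration and edge-preserving morphing, and all names and the frames-as-arrays representation are assumptions:

```python
import numpy as np

def spacetime_render(cam_a, cam_b, t_a, t_b, t, alpha):
    """Two-stage sketch of space-time light field rendering:
    (1) interpolate each unsynchronized camera stream to a common time t,
    (2) blend the software-synchronized images spatially.
    cam_a/cam_b: lists of frames; t_a/t_b: sorted timestamp arrays;
    alpha: spatial blend weight toward camera B."""
    def temporal(frames, times, t):
        i = np.searchsorted(times, t) - 1
        w = (t - times[i]) / (times[i + 1] - times[i])
        return (1.0 - w) * frames[i] + w * frames[i + 1]   # linear in time
    synced_a = temporal(cam_a, t_a, t)
    synced_b = temporal(cam_b, t_b, t)
    return (1.0 - alpha) * synced_a + alpha * synced_b      # linear in space

# Two streams with a 0.25 s timestamp offset; frame "content" equals capture time,
# so a correct render at t = 0.5 should reproduce the value 0.5 from any viewpoint.
cam_a, t_a = [np.array(0.0), np.array(1.0)], np.array([0.0, 1.0])
cam_b, t_b = [np.array(0.25), np.array(1.25)], np.array([0.25, 1.25])
view = spacetime_render(cam_a, cam_b, t_a, t_b, t=0.5, alpha=0.3)
```

The estimated subframe offsets from the paper's synchronization step would enter here as the per-camera timestamp arrays.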

  8. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to reproduce the visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of psychophysical experiments based on the human visual system. A color-rendering model that combines tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator reproduces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively low dynamic range devices. The tone-mapped image is obtained using chromatic and achromatic colors to avoid the well-known color distortions of conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method addresses the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields improved color-rendering performance compared to conventional methods.
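As background, a minimal global tone-mapping sketch of the kind such operators build on is shown below, using a Reinhard-style L/(1+L) luminance compression. This is deliberately generic: it is not the proposed model, which adds chromatic/achromatic separation and a cone-response stage, and the `key` parameter is an assumption:

```python
import numpy as np

def tone_map(hdr, key=0.18):
    """Minimal global tone-mapping sketch (Reinhard-style L/(1+L)).
    `hdr` is an HxWx3 array of linear radiance; output luminance is
    compressed into [0, 1) while chromaticity is preserved by scaling
    RGB with the luminance ratio."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))      # scene "key" (log average)
    scaled = key * lum / log_avg
    ldr_lum = scaled / (1.0 + scaled)                   # compress to [0, 1)
    ratio = np.where(lum > 0, ldr_lum / np.maximum(lum, 1e-12), 0.0)
    return hdr * ratio[..., None]

hdr = np.full((2, 2, 3), 10.0)    # uniform mid-gray HDR patch
ldr = tone_map(hdr)               # every channel compressed below 1.0
```

Psychophysically tuned operators like the one in the abstract replace the fixed curve with stages modeled on human visual response.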

  9. Multi-Detector Row Computed Tomography Findings of Pelvic Congestion Syndrome Caused by Dilated Ovarian Veins

    PubMed Central

    Eren, Suat

    2010-01-01

    Objective: To evaluate the efficacy of multi-detector row CT (MDCT) on pelvic congestion syndrome (PCS), which is often overlooked or poorly visualized with routine imaging examination. Materials and Methods: We evaluated the MDCT features of 40 patients with PCS (mean age, 45 years; range, 29–60 years) using axial, coronal, sagittal, 3D volume-rendered, and maximum intensity projection (MIP) images. Results: MDCT revealed pelvic varices and ovarian vein dilatations in all patients. Bilateral ovarian vein dilatation was present in 25 patients, and 15 patients had unilateral dilatation. While 12 cases of secondary pelvic varices occurred simultaneously with a retroaortic left renal vein, 10 cases were due solely to a mass obstruction or stenosis of venous structures. Conclusion: MDCT is an effective tool in the evaluation of PCS, and it has more advantages than other imaging modalities. PMID:25610142

  10. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  11. Ambient occlusion effects for combined volumes and tubular geometry.

    PubMed

    Schott, Mathias; Martin, Tobias; Grosset, A V Pascal; Smith, Sean T; Hansen, Charles D

    2013-06-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines, or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.
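The requirement that stream tube segments keep a coherent orientation is commonly met with rotation-minimizing (parallel transport) frames along the curve. The sketch below illustrates that general idea on a polyline; it is not the authors' spline scheme, and the function name is an assumption:

```python
import numpy as np

def parallel_transport_frames(points):
    """Rotation-minimizing frames along a polyline (Nx3): one normal
    vector per segment, each obtained by rotating the previous normal
    about the local bending axis, which avoids twisting of the tube."""
    tangents = np.diff(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Seed with any normal perpendicular to the first tangent
    n = np.cross(tangents[0], [0.0, 0.0, 1.0])
    if np.linalg.norm(n) < 1e-8:
        n = np.cross(tangents[0], [0.0, 1.0, 0.0])
    n /= np.linalg.norm(n)
    frames = [n]
    for t_prev, t_next in zip(tangents[:-1], tangents[1:]):
        axis = np.cross(t_prev, t_next)
        s = np.linalg.norm(axis)
        if s < 1e-8:                      # straight segment: carry the frame over
            frames.append(frames[-1])
            continue
        axis /= s
        angle = np.arctan2(s, np.dot(t_prev, t_next))
        n = frames[-1]
        # Rodrigues rotation of the normal about the bending axis
        n = (n * np.cos(angle) + np.cross(axis, n) * np.sin(angle)
             + axis * np.dot(axis, n) * (1.0 - np.cos(angle)))
        frames.append(n / np.linalg.norm(n))
    return np.array(frames)
```

Sweeping a circle along the curve in these frames yields a tube whose surface does not twist between segments.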

  12. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    PubMed Central

    Schott, Mathias; Martin, Tobias; Grosset, A.V. Pascal; Smith, Sean T.; Hansen, Charles D.

    2013-01-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines, or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of those geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed. PMID:23559506

  13. Integrity Determination for Image Rendering Vision Navigation

    DTIC Science & Technology

    2016-03-01

    identifying an object within a scene, tracking a SIFT feature between frames or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or

  14. HVS: an image-based approach for constructing virtual environments

    NASA Astrophysics Data System (ADS)

    Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao

    1998-09-01

    Virtual reality systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on the fly to simulate walking forward/backward, moving left/right, and 360-degree look-around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.

  15. A Heterogeneous Multiprocessor Graphics System Using Processor-Enhanced Memories

    DTIC Science & Technology

    1989-02-01

    frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form factors. The hardware consists of...generality for rendering curved surfaces, volume data, objects dcscri id with Constructive Solid Geometry, for rendering scenes using the radiosity ...f.aces and for computing a spherical radiosity lighting model (see Section 7.6). Custom Memory Chips \\ 208 bits x 128 pixels - Renderer Board ix p o a

  16. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-09-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A and image-based Technique B. A third method, Technique C, based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image, was also tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived, and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. 
Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules, with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location, and morphology.
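The deformation metric used above, the Hausdorff distance, can be computed directly for small point sets. A brute-force sketch follows; it is illustrative only, since the study reports regional statistics of the distance rather than this plain symmetric form:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (Nx3, Mx3):
    the largest distance from any point in one set to its nearest
    neighbour in the other set."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Unit-cube corners vs the same corners shifted by 0.1 along x:
# every point's nearest neighbour is its shifted twin, so the distance is 0.1.
A = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
B = A + np.array([0.1, 0.0, 0.0])
dist = hausdorff(A, B)
```

For dense surface meshes the same definition is evaluated with spatial acceleration structures rather than the O(N·M) pairwise matrix.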

  17. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    PubMed Central

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-01-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A and image-based Technique B. A third method, Technique C, based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image, was also tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived, and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. 
Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules, with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location, and morphology. PMID:28786399

  18. STAR (Simple Targeted Arterial Rendering) Technique: a Novel and Simple Method to Visualize the Fetal Cardiac Outflow Tracts

    PubMed Central

    Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Kim, Sun Kwon; Gonzalez, Juan M.; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.

    2010-01-01

    Objective To describe a novel and simple technique (STAR: Simple Targeted Arterial Rendering) to visualize the fetal cardiac outflow tracts from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed a technique to image the outflow tracts by drawing three dissecting lines through the four-chamber view of the heart contained in a STIC volume dataset. Each line generated the following plane: 1) Line 1: ventricular septum “en face” with both great vessels (pulmonary artery anterior to the aorta); 2) Line 2: pulmonary artery with continuation into the longitudinal view of the ductal arch; and 3) Line 3: long axis view of the aorta arising from the left ventricle. The pattern formed by all 3 lines intersecting approximately through the crux of the heart resembles a “star”. The technique was then tested in 50 normal hearts (15.3 – 40.4 weeks of gestation). To determine if the technique could identify planes that departed from the normal images, we tested the technique in 4 cases with proven congenital heart defects (ventricular septal defect, transposition of great vessels, tetralogy of Fallot, and pulmonary atresia with intact ventricular septum). Results The STAR technique was able to generate the intended planes in all 50 normal cases. In the abnormal cases, the STAR technique allowed identification of the ventricular septal defect, demonstrated great vessel anomalies, and displayed views that deviated from what was expected from the examination of normal hearts. Conclusions This novel and simple technique can be used to visualize the outflow tracts and ventricular septum “en face” in normal fetal hearts. The inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease involving the great vessels and/or the ventricular septum. 
The STAR technique may simplify examination of the fetal heart and could reduce operator dependency. PMID:20878672
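Generating a plane from a line drawn through a volume, as in the workflow above, amounts to resampling the volume on an oblique plane. A nearest-neighbour sketch of that basic operation follows; the names, parameters, and sampling scheme are assumptions for illustration, not the OmniView implementation:

```python
import numpy as np

def oblique_slice(vol, origin, u, v, size=64):
    """Nearest-neighbour reslice of a 3D volume along the plane spanned
    by in-plane unit vectors u and v through `origin` (all in voxel
    coordinates). Returns a size x size 2D image."""
    ii, jj = np.meshgrid(np.arange(size) - size // 2,
                         np.arange(size) - size // 2, indexing="ij")
    # 3 x size x size array of sample positions on the plane
    pts = origin[:, None, None] + u[:, None, None] * ii + v[:, None, None] * jj
    idx = np.rint(pts).astype(int)
    for axis in range(3):                  # clamp to the volume bounds
        idx[axis] = np.clip(idx[axis], 0, vol.shape[axis] - 1)
    return vol[idx[0], idx[1], idx[2]]

# Volume whose value equals its first index; an axis-aligned "oblique" plane
# at i = 10 should therefore come back as a constant image of 10.
vol = np.fromfunction(lambda i, j, k: i, (32, 32, 32))
sl = oblique_slice(vol, np.array([10.0, 16.0, 16.0]),
                   np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]), size=16)
```

A production implementation would use trilinear interpolation and map the drawn line plus the acquisition sweep direction to `origin`, `u`, and `v`.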

  19. MRI for transformation of preserved organs and their pathologies into digital formats for medical education and creation of a virtual pathology museum. A pilot study.

    PubMed

    Venkatesh, S K; Wang, G; Seet, J E; Teo, L L S; Chong, V F H

    2013-03-01

    To evaluate the feasibility of magnetic resonance imaging (MRI) for the transformation of preserved organs and their disease entities into digital formats for medical education and the creation of a virtual museum. MRI of 114 selected pathology specimen jars representing different organs and their diseases was performed using a 3 T MRI machine with two or more MRI sequences, including three-dimensional (3D) T1-weighted (T1W), 3D-T2W, 3D-FLAIR (fluid-attenuated inversion recovery), fat-water separation (DIXON), and gradient-recalled echo (GRE) sequences. Qualitative assessment of MRI for depiction of disease and internal anatomy was performed. Volume rendering was performed on commercially available workstations. The digital images, 3D models, and photographs of specimens were archived into a workstation serving as a virtual pathology museum. MRI was successfully performed on all specimens. The 3D-T1W and 3D-T2W sequences demonstrated the best contrast between normal and pathological tissues. The digital material is a useful aid for understanding disease by giving insights into internal structural changes not apparent on visual inspection alone. Volume rendering produced vivid 3D models with better contrast between normal and diseased tissue compared to real specimens or their photographs in some cases. The digital library provides good illustration material for radiological-pathological correlation by enhancing pathological anatomy and information on the nature and signal characteristics of tissues. In some specimens, the MRI appearance may differ from that of the corresponding organ and disease in vivo due to dead tissue and changes induced by prolonged contact with preservative fluid. MRI of pathology specimens is feasible and provides excellent images for education and for creating a virtual pathology museum that can serve as a permanent record of digital material for self-directed learning, improving teaching aids, and radiological-pathological correlation. 
Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  20. Validation of geometric measurements of the left atrium and pulmonary veins for analysis of reverse structural remodeling following ablation therapy

    NASA Astrophysics Data System (ADS)

    Rettmann, M. E.; Holmes, D. R., III; Gunawan, M. S.; Ge, X.; Karwoski, R. A.; Breen, J. F.; Packer, D. L.; Robb, R. A.

    2012-03-01

    Geometric analysis of the left atrium and pulmonary veins is important for studying reverse structural remodeling following cardiac ablation therapy. It has been shown that the left atrium decreases in volume and the pulmonary vein ostia decrease in diameter following ablation therapy. Most analysis techniques, however, require laborious manual tracing of image cross-sections. Pulmonary vein diameters are typically measured at the junction between the left atrium and pulmonary veins, called the pulmonary vein ostia, with manually drawn lines on volume renderings or on image cross-sections. In this work, we describe a technique for making semi-automatic measurements of the left atrium and pulmonary vein ostial diameters from high resolution CT scans and multi-phase datasets. The left atrium and pulmonary veins are segmented from a CT volume using a 3D volume approach and cut planes are interactively positioned to separate the pulmonary veins from the body of the left atrium. The cut plane is also used to compute the pulmonary vein ostial diameter. Validation experiments are presented which demonstrate the ability to repeatedly measure left atrial volume and pulmonary vein diameters from high resolution CT scans, as well as the feasibility of this approach for analyzing dynamic, multi-phase datasets. In the high resolution CT scans the left atrial volume measurements show high repeatability with approximately 4% intra-rater repeatability and 8% inter-rater repeatability. Intra- and inter-rater repeatability for pulmonary vein diameter measurements range from approximately 2 to 4 mm. For the multi-phase CT datasets, differences in left atrial volumes between a standard slice-by-slice approach and the proposed 3D volume approach are small, with percent differences on the order of 3% to 6%.
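    The intra- and inter-rater repeatability percentages quoted above can be computed as a coefficient of variation over repeated readings. A minimal sketch, assuming the statistic is the sample standard deviation relative to the mean (the abstract does not give the exact formula used); the volume readings are made up:

```python
import numpy as np

def percent_repeatability(measurements):
    """Percent repeatability as the sample standard deviation of
    repeated measurements relative to their mean (coefficient of
    variation). A hypothetical stand-in for the study's statistic."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Three repeated intra-rater readings of left atrial volume (mL)
rater_a = [101.2, 103.5, 99.8]
print(round(percent_repeatability(rater_a), 2))
```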

  1. Comparison of three methods for registration of abdominal/pelvic volume data sets from functional-anatomic scans

    NASA Astrophysics Data System (ADS)

    Mahmoud, Faaiza; Ton, Anthony; Crafoord, Joakim; Kramer, Elissa L.; Maguire, Gerald Q., Jr.; Noz, Marilyn E.; Zeleznik, Michael P.

    2000-06-01

    The purpose of this work was to evaluate three volumetric registration methods in terms of technique, user-friendliness and time requirements. CT and SPECT data from 11 patients were interactively registered using: a 3D method involving only affine transformation; a mixed 3D - 2D non-affine (warping) method; and a 3D non-affine (warping) method. In the first method representative isosurfaces are generated from the anatomical images. Registration proceeds through translation, rotation, and scaling in all three space variables. Resulting isosurfaces are fused and quantitative measurements are possible. In the second method, the 3D volumes are rendered co-planar by performing an oblique projection. Corresponding landmark pairs are chosen on matching axial slice sets. A polynomial warp is then applied. This method has undergone extensive validation and was used to evaluate the results. The third method employs visualization tools. The data model allows images to be localized within two separate volumes. Landmarks are chosen on separate slices. Polynomial warping coefficients are generated and data points from one volume are moved to the corresponding new positions. The two landmark methods were the least time consuming (10 to 30 minutes from start to finish), but did demand a good knowledge of anatomy. The affine method was tedious and required a fair understanding of 3D geometry.
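    The first method's translate/rotate/scale registration amounts to fitting an affine transform to corresponding points. A minimal least-squares sketch, assuming paired 3D landmarks are available; the point lists below are illustrative, not from the study:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine (rotation/scale/shear + translation)
    mapping landmark points src -> dst, sketching the first method's
    translate/rotate/scale registration step."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (4, 3) affine matrix
    return M

def apply_affine(M, pts):
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

src = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
dst = [[2, 1, 0], [4, 1, 0], [2, 3, 0], [2, 1, 2], [4, 3, 2]]  # scale 2, translate (2,1,0)
M = fit_affine_3d(src, dst)
print(np.allclose(apply_affine(M, src), dst))  # exact affine recovered
```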

  2. Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Ballard, J. R., Jr.

    1999-01-01

    We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects. First, in computing the distribution of scene shaded and sunlit facets and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene where the three-dimensional geometry is constructed based on measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are quite good. The rendered images exhibit realistic behavior in directional properties as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.
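    The quoted 2.1 deg C figure is a root-mean-square error over measured versus predicted brightness temperatures, which is straightforward to reproduce. A sketch with made-up temperature pairs:

```python
import numpy as np

def rmse(measured, predicted):
    """Root-mean-square error between measured and predicted
    brightness temperatures (deg C); the values below are illustrative."""
    d = np.asarray(measured, float) - np.asarray(predicted, float)
    return float(np.sqrt(np.mean(d * d)))

print(rmse([300.0, 301.5], [301.0, 300.5]))  # -> 1.0
```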

  3. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, in which 3-D modeling systems can describe a scene to 3-D rendering systems, which then compute an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  4. LOD-Sprite Technique for Accelerated Terrain Rendering

    DTIC Science & Technology

    1999-01-01

    includes limited parallax, is possible. Another category samples the full plenoptic function, resulting in 3D, 4D or even 5D image sprites [13, 10... Plenoptic modeling: An image- based rendering system. Computer Graphics (Proc. SIG- GRAPH ’95), pages 39–46, 1995. [19] P. Rademacher and G. Bishop

  5. Rapid prototyping raw models on the basis of high resolution computed tomography lung data for respiratory flow dynamics.

    PubMed

    Giesel, Frederik L; Mehndiratta, Amit; von Tengg-Kobligk, Hendrik; Schaeffer, A; Teh, Kevin; Hoffman, E A; Kauczor, Hans-Ulrich; van Beek, E J R; Wild, Jim M

    2009-04-01

    Three-dimensional image reconstruction by volume rendering and rapid prototyping has made it possible to visualize anatomic structures in three dimensions for interventional planning and academic research. Volumetric chest computed tomography was performed on a healthy volunteer. Computed tomographic images of the larger bronchial branches were segmented by an extended three-dimensional region-growing algorithm, converted into a stereolithography file, and used for computer-aided design on a laser sintering machine. The injection of gases for respiratory flow modeling and measurements using magnetic resonance imaging were done on a hollow cast. Manufacturing the rapid prototype took about 40 minutes and included the airway tree from trachea to segmental bronchi (fifth generation). The branching of the airways is clearly visible in the (3)He images, and the radial imaging has the potential to elucidate the airway dimensions. The results for flow patterns in the human bronchial tree using the rapid-prototype model with hyperpolarized helium-3 magnetic resonance imaging show the value of this model for flow phantom studies.

  6. 3D surface rendered MR images of the brain and its vasculature.

    PubMed

    Cline, H E; Lorensen, W E; Souza, S P; Jolesz, F A; Kikinis, R; Gerig, G; Kennedy, T E

    1991-01-01

    Both time-of-flight and phase contrast magnetic resonance angiography images are combined with stationary tissue images to provide data depicting two contrast relationships, yielding intrinsic discrimination of brain matter and flowing blood. A computer analysis based on nearest-neighbor segmentation and the connections between anatomical structures partitions the images into different tissue categories, from which high-resolution brain parenchymal and vascular surfaces are constructed and rendered in juxtaposition, aiding in surgical planning.
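    The nearest-neighbor segmentation of the two-channel data (angiography plus stationary-tissue intensity) can be sketched as assigning each voxel to the closest class mean in feature space. The class means and voxels below are illustrative; the paper's actual feature space and classifier details may differ:

```python
import numpy as np

def nn_segment(features, class_means):
    """Nearest-neighbor voxel classification: assign each voxel's
    two-channel intensity vector to the closest class mean."""
    f = np.asarray(features, float)       # (n_voxels, 2)
    c = np.asarray(class_means, float)    # (n_classes, 2)
    d = np.linalg.norm(f[:, None, :] - c[None, :, :], axis=2)
    return d.argmin(axis=1)               # class index per voxel

means = [[10, 80], [90, 20]]              # e.g. brain matter vs flowing blood
voxels = [[12, 78], [85, 25], [55, 45]]
print(nn_segment(voxels, means).tolist())  # -> [0, 1, 1]
```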

  7. Pitfalls in 16-detector row CT of the coronary arteries.

    PubMed

    Nakanishi, Tadashi; Kayashima, Yasuyo; Inoue, Rintaro; Sumii, Kotaro; Gomyo, Yukihiko

    2005-01-01

    Recently developed 16-detector row computed tomography (CT) has been introduced as a reliable noninvasive imaging modality for evaluating the coronary arteries. In most cases, with appropriate premedication that includes beta-blockers and nitroglycerin, ideal data sets can be acquired from which to obtain excellent-quality coronary CT angiograms, most often with multiplanar reformation, thin-slab maximum intensity projection, and volume rendering. However, various artifacts associated with data creation and reformation, postprocessing methods, and image interpretation can hamper accurate diagnosis. These artifacts can be related to pulsation (nonassessable segments, pseudostenosis) as well as rhythm disorders, respiratory issues, partial volume averaging effect, high-attenuation entities, inappropriate scan pitch, contrast material enhancement, and patient body habitus. Some artifacts have already been resolved with technical advances, whereas others represent partially inherent limitations of coronary CT angiography. Familiarity with the pitfalls of coronary angiography with 16-detector row CT, coupled with the knowledge of both the normal anatomy and anatomic variants of the coronary arteries, can almost always help radiologists avoid interpretive errors in the diagnosis of coronary artery stenosis. (c) RSNA, 2005.

  8. Integrating segmentation methods from the Insight Toolkit into a visualization application.

    PubMed

    Martin, Ken; Ibáñez, Luis; Avila, Lisa; Barré, Sébastien; Kaspersen, Jon H

    2005-12-01

    The Insight Toolkit (ITK) initiative from the National Library of Medicine has provided a suite of state-of-the-art segmentation and registration algorithms ideally suited to volume visualization and analysis. A volume visualization application that effectively utilizes these algorithms provides many benefits: it allows access to ITK functionality for non-programmers, it creates a vehicle for sharing and comparing segmentation techniques, and it serves as a visual debugger for algorithm developers. This paper describes the integration of image processing functionalities provided by ITK into VolView, a visualization application for high performance volume rendering. A free version of this visualization application is publicly available and is included in the online version of this paper. The process for developing ITK plugins for VolView according to the publicly available API is described in detail, and an application of ITK VolView plugins to the segmentation of Abdominal Aortic Aneurysms (AAAs) is presented. The source code of the ITK plugins is also publicly available and is included in the online version.

  9. Plenoptic layer-based modeling for image based rendering.

    PubMed

    Pearson, James; Brookes, Mike; Dragotti, Pier Luigi

    2013-09-01

    Image based rendering is an attractive alternative to model based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information for the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.

  10. Validity of multislice computerized tomography for diagnosis of maxillofacial fractures using an independent workstation.

    PubMed

    Dos Santos, Denise Takehana; Costa e Silva, Adriana Paula Andrade; Vannier, Michael Walter; Cavalcanti, Marcelo Gusmão Paraiso

    2004-12-01

    The purpose of this study was to demonstrate the sensitivity and specificity of multislice computerized tomography (CT) for diagnosis of maxillofacial fractures following specific protocols using an independent workstation. The study population consisted of 56 patients with maxillofacial fractures who underwent multislice CT. The original data were transferred to an independent workstation using volumetric imaging software to generate axial images and simultaneous multiplanar (MPR) and 3-dimensional (3D-CT) volume rendering reconstructed images. The images were then processed and interpreted by 2 examiners using the following protocols independently of each other: axial, MPR/axial, 3D-CT images, and the association of axial/MPR/3D images. The clinical/surgical findings were considered the gold standard corroborating the diagnosis of the fractures and their anatomic localization. The statistical analysis was carried out using validity and chi-squared tests. The association of axial/MPR/3D images yielded a higher sensitivity (95.8%) and specificity (99%) than the other methods across all regions analyzed. CT imaging demonstrated high specificity and sensitivity for maxillofacial fractures. The association of axial/MPR/3D-CT images added important information relative to the other CT protocols.
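    Sensitivity and specificity against the clinical/surgical gold standard reduce to simple ratios of confusion-matrix counts. A sketch with hypothetical counts chosen only to reproduce figures of the same order as the reported 95.8%/99%; the study's actual counts are not given in the abstract:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Validity measures: sensitivity = TP/(TP+FN),
    specificity = TN/(TN+FP). Counts here are illustrative."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec

# Hypothetical counts: 46 detected fractures, 2 missed,
# 97 correct negatives, 1 false positive
sens, spec = sensitivity_specificity(tp=46, fp=1, tn=97, fn=2)
print(round(100 * sens, 1), round(100 * spec, 1))  # -> 95.8 99.0
```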

  11. CT imaging with a mobile C-arm prototype

    NASA Astrophysics Data System (ADS)

    Cheryauka, Arvi; Tubbs, David; Langille, Vinton; Kalya, Prabhanjana; Smith, Brady; Cherone, Rocco

    2008-03-01

    Mobile X-ray imagery is a ubiquitous tool in conventional musculoskeletal and soft tissue applications. The next generation of mobile C-arm systems can provide clinicians performing minimally-invasive surgery and pain management procedures with both real-time high-resolution fluoroscopy and intra-operative CT imaging modalities. In this study, we investigate two experimental C-arm CT system configurations and evaluate their imaging capabilities. In the non-destructive evaluation configuration, the X-ray tube-detector assembly is stationary while the imaged object is placed on a rotating table. In the medical imaging configuration, the C-arm gantry moves around the patient and the table. In our research setting, we connect the participating devices through a Mobile X-Ray Imaging Environment known as MOXIE. MOXIE is a set of software applications for internal research at GE Healthcare - Surgery, used to examine the imaging performance of experimental systems. Anthropomorphic phantom volume renderings and orthogonal slices of reconstructed images are obtained and displayed. The experimental C-arm CT results show CT-like image quality that may be suitable for interventional procedures and real-time data management, and therefore have great potential for effective use on the clinical floor.

  12. Calibration, reconstruction, and rendering of cylindrical millimeter-wave image data

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2011-05-01

    Cylindrical millimeter-wave imaging systems and technology have been under development at the Pacific Northwest National Laboratory (PNNL) for several years. This technology has been commercialized, and systems are currently being deployed widely across the United States and internationally. These systems are effective at screening for concealed items of all types; however, new sensor designs, image reconstruction techniques, and image rendering algorithms could potentially improve performance. At PNNL, a number of specific techniques have been developed recently to improve cylindrical imaging methods including wideband techniques, combining data from full 360-degree scans, polarimetric imaging techniques, calibration methods, and 3-D data visualization techniques. Many of these techniques exploit the three-dimensionality of the cylindrical imaging technique by optimizing the depth resolution of the system and using this information to enhance detection. Other techniques, such as polarimetric methods, exploit scattering physics of the millimeter-wave interaction with concealed targets on the body. In this paper, calibration, reconstruction, and three-dimensional rendering techniques will be described that optimize the depth information in these images and the display of the images to the operator.

  13. Helical CT scan with 2D and 3D reconstructions and virtual endoscopy versus conventional endoscopy in the assessment of airway disease in neonates, infants and children.

    PubMed

    Yunus, Mahira

    2012-11-01

    To study the use of helical computed tomography 2-D and 3-D images, and virtual endoscopy in the evaluation of airway disease in neonates, infants and children and its value in lesion detection, characterisation and extension. Conducted at Al-Noor Hospital, Makkah, Saudi Arabia, from January 1 to June 30, 2006, the study comprised 40 patients with stridor due to various causes of airway obstruction. They were examined by helical CT scan with 2-D and 3-D reconstructions and virtual endoscopy. The level and characterisation of lesions were determined and the results compared with actual endoscopic findings. Conventional endoscopy was chosen as the gold standard, and virtual endoscopy was evaluated in terms of the sensitivity and specificity of the procedure. For statistical purposes, SPSS version 10 was used. All CT methods detected airway stenosis or obstruction. Accuracy was 98% (n=40) for virtual endoscopy, 96% (n=48) for 3-D external rendering, 90% (n=45) for multiplanar reconstructions and 86% (n=43) for axial images. For detection and grading of stenosis, the results of 3-D internal and external volume rendering agreed more closely with conventional endoscopy than did 2-D minimum intensity multiplanar reconstructions and axial CT slices. Even high-grade stenoses, through which a conventional endoscope cannot be passed, could be evaluated with the virtual endoscope. One case, a 4-year-old patient with tracheomalacia, could not be diagnosed by helical CT scan and virtual bronchoscopy; it was diagnosed on conventional endoscopy and required CT scanning in both inspiration and expiration. Virtual endoscopy [VE] enabled better assessment of stenosis than 3-D external rendering, 2-D multiplanar reconstruction [MPR] or axial slices. It can replace conventional endoscopy in the assessment of airway disease without any additional risk.

  14. Efficient Encoding and Rendering of Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei

    1998-01-01

    Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding, for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have in many cases shown reductions as high as 90% in both storage space and inter-frame delay.
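    The macro-voxel fusion and temporal subtree merging described above can be sketched as a tolerance-based octree collapse plus reuse of unchanged subtrees between time steps. A minimal sketch, not the authors' implementation; `tol` and the toy volume are illustrative:

```python
import numpy as np

def build_octree(vol, tol=0):
    """Recursively collapse a cubic volume into a single leaf when all
    voxels fall within `tol` of each other (macro-voxel fusion);
    otherwise split into 8 octants."""
    if vol.max() - vol.min() <= tol or vol.shape[0] == 1:
        return float(vol.mean())               # fused macro voxel
    h = vol.shape[0] // 2
    return [build_octree(vol[x:x + h, y:y + h, z:z + h], tol)
            for x in (0, h) for y in (0, h) for z in (0, h)]

def share_identical(prev, curr):
    """Temporal difference encoding: reuse the previous time step's
    subtree when the new one is identical."""
    return prev if prev == curr else curr

vol = np.zeros((4, 4, 4))
vol[2:, 2:, 2:] = 5.0                          # one bright octant
tree_t0 = build_octree(vol)                    # 7 empty leaves + 1 bright leaf
tree_t1 = build_octree(vol.copy())             # unchanged next frame
print(share_identical(tree_t0, tree_t1) is tree_t0)  # -> True
```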

  15. Femtosecond two-photon high-resolution 3D imaging, spatial-volume rendering and microspectral characterization of immunolocalized MHC-II and mLangerin/CD207 antigens in the mouse epidermis.

    PubMed

    Tirlapur, Uday K; Mulholland, William J; Bellhouse, Brian J; Kendall, Mark; Cornhill, J Fredrick; Cui, Zhanfeng

    2006-10-01

    Langerhans cells (LCs) play a sentinel role by initiating both adaptive and innate immune responses to antigens pertinent to the skin. With the discovery of various LC markers including antibodies to major histocompatibility complex class II (MHC-II) molecules and CD1a, the intracellular presence of racket-shaped "Birbeck granules," and very recently Langerin/CD207, LCs can be readily distinguished from other subsets of dendritic cells. Femtosecond two-photon laser scanning microscopy (TPLSM) in recent years has emerged as an alternative to the single photon-excitation based confocal laser scanning microscope (CLSM), particularly for minimally-invasive deep-tissue 3D and 4D vital as well as nonvital biomedical imaging. We have recently combined high resolution two-photon immunofluorescence (using anti MHC-II and Langerin/CD207 antibodies) imaging with microspectroscopy and advanced image-processing/volume-rendering modalities. In this work, we demonstrate the use of this novel state-of-the-art combinational approach to characterize the steady state 3D organization and spectral features of the mouse epidermis, particularly to identify the spatial distribution of LCs. Our findings provide unequivocal direct evidence that, in the mouse epidermis, the MHC-II and mLangerin/CD207 antigens do indeed manifest a high degree of colocalization around the nucleus of the LCs, while in the distal dendritic processes, mLangerin/CD207 antigens are rather sparsely distributed as punctate structures. This unique possibility to simultaneously visualize high resolution 3D-resolved spatial distributions of two different immuno-reactive antigens, namely MHC-II and mLangerin/CD207, along with the nuclei of LCs and the adjacent epidermal cells can find interesting applications.
These could involve aspects associated with pragmatic analysis of the kinetics of LCs migration as a function of immuno-dermatological responses during (1) human Immunodeficiency virus disease progression, (2) vaccination and targeted gene therapy, (3) skin transplantation/plastic surgery, (4) ultraviolet and other radiation exposure, (5) tissue-engineering of 3D skin constructs, as well as in (6) cosmetic industry, to unravel the influence of cosmeceuticals.

  16. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    area and different methods have been proposed. Pertinent methods include: Camera Calibration , Structure from Motion, Stereo Correspondence, and Image...Based Rendering 1.1.1 Camera Calibration Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal...can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the

  17. MTO-like reference mask modeling for advanced inverse lithography technology patterns

    NASA Astrophysics Data System (ADS)

    Park, Jongju; Moon, Jongin; Son, Suein; Chung, Donghoon; Kim, Byung-Gook; Jeon, Chan-Uk; LoPresti, Patrick; Xue, Shan; Wang, Sonny; Broadbent, Bill; Kim, Soonho; Hur, Jiuk; Choo, Min

    2017-07-01

    Advanced Inverse Lithography Technology (ILT) can result in mask post-OPC databases with very small address units, all-angle figures, and very high vertex counts. This creates mask inspection issues for existing mask inspection database rendering. These issues include: large data volumes, low transfer rate, long data preparation times, slow inspection throughput, and marginal rendering accuracy leading to high false detections. This paper demonstrates the application of a new rendering method including a new OASIS-like mask inspection format, new high-speed rendering algorithms, and related hardware to meet the inspection challenges posed by Advanced ILT masks.

  18. Photoacoustic and ultrasound dual-modality imaging for inflammatory arthritis

    NASA Astrophysics Data System (ADS)

    Xu, Guan; Chamberland, David; Girish, Gandikota; Wang, Xueding

    2014-03-01

    Arthritis is a leading cause of disability, affecting 46 million people in the U.S. Rendering new optical contrast in articular tissues at high spatial and temporal resolution, emerging photoacoustic imaging (PAI) combined with more established ultrasound (US) imaging technologies provides unique opportunities for diagnosis and treatment monitoring of inflammatory arthritis. In addition to capturing peripheral bone and soft tissue images, PAI has the capability to quantify hemodynamic properties including regional blood oxygenation and blood volume, both abnormal in synovial tissues affected by arthritis. Therefore, PAI, especially when performed together with US, should be of considerable help for further understanding the pathophysiology of arthritis as well as assisting in therapeutic decisions, including assessing the efficacy of new pharmacological therapies. In this paper, we will review our recent work on the development of PAI for application to the diagnostic imaging and therapeutic monitoring of inflammatory arthritis. We will present imaging results from a home-built imaging system and from another based on a commercial US system. The performance of PAI in evaluating pharmacological therapy in an animal model of arthritis will be shown. Moreover, our recent work on PAI and US dual-modality imaging of human peripheral joints in vivo will also be presented.

  19. Evaluation of left ventricular wall motion and function in patients with previous myocardial infarction by three-dimensional 99mTc-HSAD multigated cardiac pool imaging.

    PubMed

    Yamazaki, J; Naitou, K; Ishida, S; Uno, N; Saisho, K; Munakata, T; Morishita, T; Takano, M; Yabe, Y

    1997-05-01

    To evaluate left ventricular (LV) wall motion stereoscopically from all directions and to calculate the LV volume by three-dimensional (3D) imaging. 99mTc-DTPA human serum albumin-multigated cardiac pool-single photon emission computed tomography (99mTc-MUGA-SPECT) was performed. A new data processing program was developed with the Application Visualization System-Medical Viewer (AVS-MV) based on images obtained from 99mTc-MUGA-SPECT. In patients with previous myocardial infarction, LV function and LV wall motion were evaluated by 3D-99mTc-MUGA imaging. The LV end-diastolic volume (LVEDV) and end-systolic volume (LVESV) were obtained from 3D-99mTc-MUGA images by the surface rendering method, and the left ventricular ejection fraction (LVEF) was calculated at thresholds of 35% (T1), 40% (T2), 45% (T3), and 50% (T4). There was a strong correlation between the LV volume calculated by 3D-99mTc-MUGA imaging at a threshold of 40% and that determined by contrast left ventriculography (LVEDV: 194.7 +/- 36.0 ml vs. 198.7 +/- 39.1 ml, r = 0.791, p < 0.001; LVESV: 91.6 +/- 44.5 ml vs. 93.3 +/- 41.3 ml, r = 0.953, p < 0.001). When compared with the LVEF data obtained by left ventriculography, significant correlations were found for 3D images reconstructed at each threshold (T1: r = 0.966; T2: r = 0.962; T3: r = 0.958; and T4: r = 0.955). In addition, when LV wall motion obtained by 3D-99mTc-MUGA imaging (LAT and LAO views) was compared with the results obtained by left ventriculography (RAO and LAO views), there was good agreement. 3D-99mTc-MUGA imaging was superior in allowing evaluation of LV wall motion in all directions and in assessment of LV function, since data acquisition and image reconstruction could be done within a short time with the three-detector imaging system and AVS-MV.
This method appears to be very useful for the observation of both LV wall motion and LV function in patients with ischemic heart disease, because it is a noninvasive examination.
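    The ejection fraction follows directly from the end-diastolic and end-systolic volumes extracted from the surface-rendered 3D images. A one-line sketch, applied here to the mean volumes reported for the 3D method at the 40% threshold:

```python
def ejection_fraction(edv, esv):
    """LVEF (%) from end-diastolic and end-systolic volumes (mL)."""
    return 100.0 * (edv - esv) / edv

# Mean LVEDV and LVESV reported for 3D-99mTc-MUGA at the 40% threshold
print(round(ejection_fraction(194.7, 91.6), 1))  # -> 53.0
```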

  20. Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome

    PubMed Central

    Hänel, Claudia; Pieperhoff, Peter; Hentschel, Bernd; Amunts, Katrin; Kuhlen, Torsten

    2014-01-01

    The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) can provide not only information about the localization and distribution of the volume loss, but also helps to understand the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and by the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: a direct line-of-sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run both in standard desktop environments and in immersive virtual reality environments with stereoscopic viewing for improved depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions. PMID:24847243

  1. Stereoscopic augmented reality using ultrasound volume rendering for laparoscopic surgery in children

    NASA Astrophysics Data System (ADS)

    Oh, Jihun; Kang, Xin; Wilson, Emmanuel; Peters, Craig A.; Kane, Timothy D.; Shekhar, Raj

    2014-03-01

    In laparoscopic surgery, live video provides visualization of the exposed organ surfaces in the surgical field, but is unable to show internal structures beneath those surfaces. Laparoscopic ultrasound is often used to visualize the internal structures, but its use is limited to intermittent confirmation because of the need for an extra hand to maneuver the ultrasound probe. Other limitations of using ultrasound are the difficulty of interpretation and the need for an extra port. The size of the ultrasound transducer may also be too large for use in small children. In this paper, we report on an augmented reality (AR) visualization system that features continuous hands-free volumetric ultrasound scanning of the surgical anatomy and video imaging from a stereoscopic laparoscope. The acquisition of the volumetric ultrasound image is realized by precisely controlling a back-and-forth movement of an ultrasound transducer mounted on a linear slider, and the ultrasound volume is refreshed several times per minute. This scanner will sit outside of the body in the envisioned use scenario and could even be integrated into the operating table. Overlaying the maximum intensity projection (MIP) of the ultrasound volume on the laparoscopic stereo video through geometric transformations yields an AR visualization system particularly suitable for children, because ultrasound is radiation-free and provides higher-quality images in small patients. The proposed AR representation promises to be better than one using ultrasound slice data.
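    The MIP at the heart of the AR display is a per-pixel maximum along the viewing axis of the ultrasound volume. A minimal numpy sketch with a toy volume; the real system additionally applies geometric transformations to register the projection to the stereo video:

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Maximum intensity projection of an ultrasound volume along the
    viewing axis, producing the 2D image overlaid on the video."""
    return np.asarray(volume).max(axis=axis)

vol = np.zeros((3, 2, 2))
vol[1, 0, 1] = 7.0            # a single bright reflector in the volume
mip = max_intensity_projection(vol, axis=0)
print(mip.tolist())           # -> [[0.0, 7.0], [0.0, 0.0]]
```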

  2. Abdo-Man: a 3D-printed anthropomorphic phantom for validating quantitative SIRT.

    PubMed

    Gear, Jonathan I; Cummings, Craig; Craig, Allison J; Divoli, Antigoni; Long, Clive D C; Tapner, Michael; Flux, Glenn D

    2016-12-01

    The use of selective internal radiation therapy (SIRT) is rapidly increasing, and the need for quantification and dosimetry is becoming more widespread to facilitate treatment planning and verification. The aim of this project was to develop an anthropomorphic phantom that can be used as a validation tool for post-SIRT imaging and its application to dosimetry. The phantom design was based on anatomical data obtained from a T1-weighted volume-interpolated breath-hold examination (VIBE) on a Siemens Aera 1.5 T MRI scanner. The liver, lungs and abdominal trunk were segmented using the Hermes image processing workstation. Organ volumes were then uploaded to the Delft Visualization and Image processing Development Environment for smoothing and surface rendering. Triangular meshes defining the iso-surfaces were saved as stereo lithography (STL) files and imported into the Autodesk® Meshmixer software. Organ volumes were subtracted from the abdomen and a removable base designed to allow access to the liver cavity. Connection points for placing lesion inserts and filling holes were also included. The phantom was manufactured using a Stratasys Connex3 PolyJet 3D printer. The printer uses stereolithography technology combined with ink jet printing. Print material is a solid acrylic plastic, with similar properties to polymethylmethacrylate (PMMA). Measured Hounsfield units and calculated attenuation coefficients of the material were shown to also be similar to PMMA. Total print time for the phantom was approximately 5 days. Initial scans of the phantom have been performed with Y-90 bremsstrahlung SPECT/CT, Y-90 PET/CT and Tc-99m SPECT/CT. The CT component of these images compared well with the original anatomical reference, and measurements of volume agreed to within 9 %. Quantitative analysis of the phantom was performed using all three imaging techniques. 
Lesion and normal liver absorbed doses were calculated from the quantitative images in three dimensions using the local deposition method. 3D printing is a flexible and cost-efficient technology for the manufacture of anthropomorphic phantoms. Application of such phantoms will enable quantitative imaging and dosimetry methodologies to be evaluated, which with optimisation could help improve outcomes for patients.
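    The comparison of measured Hounsfield units with PMMA rests on the standard CT scale, HU = 1000 × (μ − μ_water)/μ_water. A small sketch; the attenuation coefficients below are illustrative round numbers, not measured values from the paper:

```python
def hounsfield_from_mu(mu, mu_water):
    """Standard CT scale: HU = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

# Illustrative linear attenuation coefficients (cm^-1), assumed for the
# example: water ~0.19, a PMMA-like plastic ~0.21.
hu_water = hounsfield_from_mu(0.19, 0.19)
hu_pmma = hounsfield_from_mu(0.21, 0.19)
```

Water maps to 0 HU by construction; a material slightly more attenuating than water, like PMMA, maps to a modest positive HU value.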

  3. Segmentation, surface rendering, and surface simplification of 3-D skull images for the repair of a large skull defect

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Shi, Pengfei; Li, Shuguang

    2009-10-01

    Given the potential demonstrated by research into bone-tissue engineering, the use of medical image data for the rapid prototyping (RP) of scaffolds is a subject worthy of research. Computer-aided design and manufacture and medical imaging have created new possibilities for RP. Accurate and efficient design and fabrication of anatomic models is critical to these applications. We explore the application of RP computational methods to the repair of a pediatric skull defect. The focus of this study is the segmentation of the defect region seen in computerized tomography (CT) slice images of this patient's skull and the three-dimensional (3-D) surface rendering of the patient's CT-scan data. We assess whether our segmentation and surface-rendering software can improve the generation of an implant model to fill a skull defect.

  4. LONI visualization environment.

    PubMed

    Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W

    2006-06-01

    Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and other clinical, pedagogical, and research endeavors.

  5. An Integrated MRI and MRS Approach to Evaluation of Multiple Sclerosis with Cognitive Impairment

    NASA Astrophysics Data System (ADS)

    Liang, Zhengrong; Li, Lihong; Lu, Hongbing; Huang, Wei; Tudorica, Alina; Krupp, Lauren

    Magnetic resonance imaging and spectroscopy (MRI/MRS) plays a unique role in multiple sclerosis (MS) evaluation, because of its ability to provide both high image contrast and significant chemical change among brain tissues. The image contrast renders possible the quantification of tissue volumetric and texture variations, e.g., cerebral atrophy and its progression speed, reflecting the ongoing destructive pathologic processes. A chemical change reflects an early sign of pathological alteration, e.g., decreased N-acetyl aspartate (NAA) in lesions and normal-appearing white matter, related to axonal damage or dysfunction. Both MRI and MRS encounter the partial volume (PV) effect, which compromises the quantitative capability, especially for MRS. This work aims to develop a statistical framework to segment the tissue mixtures inside each image element, theoretically eliminating the PV effect, and to apply the framework to the evaluation of MS with cognitive impairment. The quantitative measures from MRI/MRS neuroimaging are strongly correlated with the qualitative neuropsychological scores of the Brief Repeatable Battery (BRB) test on cognitive impairment, demonstrating the usefulness of the PV image segmentation framework in this clinically significant problem.

  6. Thoracic wall trauma—misdiagnosed lesions on radiographs and usefulness of ultrasound, multidetector computed tomography and magnetic resonance imaging

    PubMed Central

    Facenda, Catherine; Vaz, Nuno; Castañeda, Edgar Augusto; del Amo, Montserrat; Garcia-Diez, Ana Isabel; Pomes, Jaime

    2017-01-01

    Blunt injuries to the chest wall are an important concern in emergency room (ER) departments, being the third most common injury in trauma patients, and ominous complications can arise from them. This article describes different types of traumatic events affecting the chest wall that may be misdiagnosed on conventional X-ray. Special emphasis is placed on computed tomography (CT) and multidetector CT (MDCT) imaging. This technique is considered the “gold standard” for trauma patients, owing to its fast acquisition covering the whole area of interest in the axial plane, its ability to reconstruct multiplanar (2D, 3D) volume-rendered images of superb quality, and its angiographic CT capabilities for evaluating vascular damage. Complementary techniques such as ultrasonography (US) and magnetic resonance imaging (MRI) may improve diagnostic accuracy given their capacity for visualising soft-tissue trauma (musculotendinous tears) and subtle fractures. All these imaging methods play an important role in quantifying the severity of chest wall trauma. The findings of this study are illustrated didactically with cases from our archives. PMID:28932697

  7. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with the sum of absolute differences (SAD) chosen as the similarity measure. The reference image is then layered and the parallax calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements for the depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also impressive. On average, the results achieve satisfactory image quality: the mean SSIM value relative to real viewpoint images reaches 0.9525, the PSNR reaches 38.353 dB, and the image histogram similarity reaches 93.77%.
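    The SAD similarity measure used in the depth step can be illustrated with brute-force block matching between a stereo pair. A sketch under the assumption of purely horizontal disparity; block size, search range, and the synthetic images are illustrative, not from the paper:

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=8):
    """Brute-force block matching: for each pixel of `left`, find the
    horizontal shift into `right` that minimizes the sum of absolute
    differences (SAD) over a square block."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(ref - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic stereo pair: `left` is `right` shifted right by 2 pixels.
rng = np.random.default_rng(0)
right = rng.random((16, 16))
left = np.roll(right, 2, axis=1)
disp = sad_disparity(left, right, block=3, max_disp=4)
```

Away from the wrap-around columns of the synthetic pair, the recovered disparity equals the true 2-pixel shift.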

  8. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.

    PubMed

    Laha, Bireswar; Bowman, Doug A; Socha, John J

    2014-04-01

    Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.

  9. Toward frameless stereotaxy: anatomical-vascular correlation and registration

    NASA Astrophysics Data System (ADS)

    Henri, Christopher J.; Cukiert, A.; Collins, D. Louis; Olivier, A.; Peters, Terence M.

    1992-09-01

    We present a method to correlate and register a projection angiogram with volume-rendered tomographic data from the same patient. Previously, we have described how this may be accomplished using a stereotactic frame to handle the required coordinate transformations. Here we examine the efficacy of employing anatomically based landmarks, as opposed to external fiducials, to achieve the same results. The experiments required a neurosurgeon to identify several homologous points in a DSA image and an MRI volume, which were subsequently used to compute the coordinate transformations governing the matching procedure. Correlation accuracy was assessed by comparing these results to those employing fiducial markers on a stereotactic frame, and by examining how different levels of noise in the positions of the homologous points affect the resulting coordinate transformations. Further simulations suggest that this method has the potential to be used in planning stereotactic procedures without the use of a frame.

  10. Agreement and reliability of pelvic floor measurements during rest and on maximum Valsalva maneuver using three-dimensional translabial ultrasound and virtual reality imaging.

    PubMed

    Speksnijder, L; Oom, D M J; Koning, A H J; Biesmeijer, C S; Steegers, E A P; Steensma, A B

    2016-08-01

    Imaging of the levator ani hiatus provides valuable information for the diagnosis and follow-up of patients with pelvic organ prolapse (POP). This study compared measurements of levator ani hiatal volume during rest and on maximum Valsalva, obtained using conventional three-dimensional (3D) translabial ultrasound and virtual reality imaging. Our objectives were to establish their agreement and reliability, and their relationship with prolapse symptoms and POP quantification (POP-Q) stage. One hundred women with an intact levator ani were selected from our tertiary clinic database. Information on clinical symptoms was obtained using standardized questionnaires. Ultrasound datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm, at the level of minimal hiatal dimensions, during rest and on maximum Valsalva. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatal volume (in cm³) on conventional 3D ultrasound. Levator ani hiatal volume (in cm³) was measured semi-automatically by virtual reality imaging using a segmentation algorithm. Twenty patients were chosen randomly to analyze intra- and interobserver agreement. The mean difference between levator hiatal volume measurements on 3D ultrasound and by virtual reality was 1.52 cm³ (95% CI, 1.00-2.04 cm³) at rest and 1.16 cm³ (95% CI, 0.56-1.76 cm³) during maximum Valsalva (P < 0.001). Both intra- and interobserver intraclass correlation coefficients were ≥ 0.96 for conventional 3D ultrasound and > 0.99 for virtual reality. Patients with prolapse symptoms or POP-Q stage ≥ 2 had significantly larger hiatal measurements than those without symptoms or with POP-Q stage < 2. Levator ani hiatal volume at rest and on maximum Valsalva is significantly smaller when measured using virtual reality compared with conventional 3D ultrasound; however, this difference does not seem clinically important. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.

  11. Comparison of grey matter volume and thickness for analysing cortical changes in chronic schizophrenia: a matter of surface area, grey/white matter intensity contrast, and curvature.

    PubMed

    Kong, Li; Herold, Christina J; Zöllner, Frank; Salat, David H; Lässer, Marc M; Schmid, Lena A; Fellhauer, Iven; Thomann, Philipp A; Essig, Marco; Schad, Lothar R; Erickson, Kirk I; Schröder, Johannes

    2015-02-28

    Grey matter volume and cortical thickness are the two most widely used measures for detecting grey matter morphometric changes in various diseases such as schizophrenia. However, these two measures only share partially overlapping regions when identifying morphometric changes. Few studies have investigated the contributions of potential factors to the differences between grey matter volume and cortical thickness. To investigate this question, 3T magnetic resonance images from 22 patients with schizophrenia and 20 well-matched healthy controls were chosen for analysis. Grey matter volume and cortical thickness were measured by VBM and FreeSurfer. Grey matter volume results were then rendered onto the surface template of FreeSurfer to compare the differences from cortical thickness in anatomical locations. Discrepancy regions of grey matter volume and thickness, where grey matter volume significantly decreased but without corresponding evidence of cortical thinning, involved the rostral middle frontal, precentral, lateral occipital and superior frontal gyri. Subsequent region-of-interest analysis demonstrated that changes in surface area, grey/white matter intensity contrast and curvature accounted for the discrepancies. Our results suggest that the differences between grey matter volume and thickness could be jointly driven by surface area, grey/white matter intensity contrast and curvature. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Center for Automatic Target Recognition Research. Delivery Order 0005: Image Georegistration, Camera Calibration, and Dismount Categorization in Support of DEBU from Layered Sensing

    DTIC Science & Technology

    2011-07-01

    Figure 17: CAESAR Data. The leftmost image is a color polygon rendering of a subject using 316,691 polygon faces and 161,951 points. The small white dots on the surface of the subject are landmark points.

  13. Semantic layers for illustrative volume rendering.

    PubMed

    Rautek, Peter; Bruckner, Stefan; Gröller, Eduard

    2007-01-01

    Direct volume rendering techniques map volumetric attributes (e.g., density, gradient magnitude, etc.) to visual styles. Commonly this mapping is specified by a transfer function. The specification of transfer functions is a complex task and requires expert knowledge about the underlying rendering technique. In the case of multiple volumetric attributes and multiple visual styles, the specification of the multi-dimensional transfer function becomes more challenging and non-intuitive. We present a novel methodology for the specification of a mapping from several volumetric attributes to multiple illustrative visual styles. We introduce semantic layers that allow a domain expert to specify the mapping in the natural language of the domain. A semantic layer defines the mapping of volumetric attributes to one visual style. Volumetric attributes and visual styles are represented as fuzzy sets. The mapping is specified by rules that are evaluated with fuzzy logic arithmetic. The user specifies the fuzzy sets and the rules without special knowledge about the underlying rendering technique. Semantic layers allow for a linguistic specification of the mapping from attributes to visual styles, replacing the traditional transfer function specification.
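    The rule evaluation described above can be sketched with minimal fuzzy-logic machinery: membership functions for the volumetric attributes, min for conjunction, and each rule's firing strength weighting one visual style. The membership shapes, attribute ranges, and rule names below are illustrative, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def evaluate_rules(density, gradient):
    """Evaluate two toy semantic-layer rules: conjunction is min, and
    each rule's firing strength weights one visual style."""
    high_density = tri(density, 0.5, 1.0, 1.5)
    low_gradient = tri(gradient, -0.5, 0.0, 0.5)
    high_gradient = tri(gradient, 0.5, 1.0, 1.5)
    return {
        # IF density IS high AND gradient IS low THEN style 'tissue'
        "tissue": min(high_density, low_gradient),
        # IF gradient IS high THEN style 'contour'
        "contour": high_gradient,
    }

s = evaluate_rules(density=0.9, gradient=0.2)
```

A renderer would evaluate such rules per voxel and blend the styles weighted by their firing strengths.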

  14. Evaluation of a hyperspectral image database for demosaicking purposes

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed-Chaker; Süsstrunk, Sabine

    2011-01-01

    We present a study on the applicability of hyperspectral images for evaluating color filter array (CFA) designs and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipeline and to compare two different scenarios: evaluating the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluating their performance applied to the final sRGB color-rendered image. The second scenario is the one most frequently used in the literature, because CFA designs and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipeline with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using the CMSE, CPSNR, s-CIELAB and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, indeed depends significantly on whether the mosaicking/demosaicking is applied to camera raw values or to already rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.
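    The mosaicking/demosaicking step can be sketched for an RGGB Bayer pattern with bilinear interpolation implemented as normalized convolution. This is a toy stand-in for the three linear techniques tested in the paper (which are not named in the abstract):

```python
import numpy as np

def conv3x3(img, k):
    """'Same'-size 3x3 cross-correlation with zero padding."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer pattern from a full-color image."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True   # R
    mask[0::2, 1::2, 1] = True   # G on red rows
    mask[1::2, 0::2, 1] = True   # G on blue rows
    mask[1::2, 1::2, 2] = True   # B
    mosaic = np.zeros((h, w))
    for c in range(3):
        mosaic[mask[..., c]] = rgb[..., c][mask[..., c]]
    return mosaic, mask

def bilinear_demosaic(mosaic, mask):
    """Fill each channel by normalized convolution: interpolate the
    missing samples from the available neighbors."""
    k = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    out = np.zeros(mask.shape)
    for c in range(3):
        num = conv3x3(np.where(mask[..., c], mosaic, 0.0), k)
        den = conv3x3(mask[..., c].astype(float), k)
        out[..., c] = num / np.maximum(den, 1e-12)
    return out

rgb = np.full((8, 8, 3), 0.5)        # flat gray test image
mosaic, mask = bayer_mosaic(rgb)
out = bilinear_demosaic(mosaic, mask)
```

On a flat gray image the reconstruction is exact; on real images the residual error is what metrics such as CPSNR or s-CIELAB would quantify.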

  15. Real-time 3D image reconstruction guidance in liver resection surgery.

    PubMed

    Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-04-01

    Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, it adds difficulty that can be reduced through computer technology. From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that augments the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of the anatomical or pathological structures appearing in the medical image, and registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in terms of safety, but also the current limits that automatic augmented reality will overcome. Virtual patient modeling should be mandatory for certain interventions that now have to be defined, such as liver surgery. Augmented reality is clearly the next step in new surgical instrumentation but currently remains limited due to the complexity of organ deformations during surgery.
Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue, thanks to the development of the hybrid OR.

  16. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored as either: (a) pre-rendered format, corresponding to a photographic print, or (b) un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depends on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end user applications, such as simple report text viewing and display of a selected image, are not so demanding and generic image formats such as JPEG are sometimes used. However, these are lacking some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in the case of two simple healthcare end user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  17. Percutaneous Vertebroplasty: Preliminary Experiences with Rotational Acquisitions and 3D Reconstructions for Therapy Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodek-Wuerz, Roman; Martin, Jean-Baptiste; Wilhelm, Kai

    Percutaneous vertebroplasty (PVP) is carried out under fluoroscopic control in most centers. The exclusion of implant leakage and the assessment of implant distribution might be difficult based on two-dimensional radiographic projection images only. We evaluated the feasibility of performing a follow-up examination after PVP with rotational acquisitions and volumetric reconstructions in the angio suite. Twenty consecutive patients underwent standard PVP procedures under fluoroscopic control. Immediate postprocedure evaluation of the implant distribution in the angio suite (BV 3000; Philips, The Netherlands) was performed using rotational acquisitions (typical parameters for the image acquisition included a 17-cm field of view and 200 acquired images over a total angular range of 180°). Postprocessing of the acquired volumetric datasets included multiplanar reconstruction (MPR), maximum intensity projection (MIP), and volume rendering technique (VRT) images that were displayed as two-dimensional slabs or as entire three-dimensional volumes. Image evaluation included lesion and implant assessment, with special attention given to implant leakage. Findings from rotational acquisitions were compared to findings from postinterventional CT. The time to perform and postprocess the rotational acquisitions was in all cases less than 10 min. Assessment of implant distribution after PVP using rotational image acquisition methods and volumetric reconstructions was possible in all patients. Cement distribution and potential leakage sites were visualized best on MIP images presented as slabs. Of a total of 33 leakages detected with CT, 30 could be correctly detected by rotational image acquisition. Rotational image acquisitions and volumetric reconstruction methods provided a fast method to control radiographically the result of PVP in our cases.

  18. Robotic intrafractional US guidance for liver SABR: System design, beam avoidance, and clinical imaging.

    PubMed

    Schlosser, Jeffrey; Gong, Ren Hui; Bruder, Ralf; Schweikard, Achim; Jang, Sungjune; Henrie, John; Kamaya, Aya; Koong, Albert; Chang, Daniel T; Hristov, Dimitre

    2016-11-01

    To present a system for robotic 4D ultrasound (US) imaging concurrent with radiotherapy beam delivery and estimate the proportion of liver stereotactic ablative body radiotherapy (SABR) cases in which robotic US image guidance can be deployed without interfering with clinically used VMAT beam configurations. The image guidance hardware comprises a 4D US machine, an optical tracking system for measuring US probe pose, and a custom-designed robot for acquiring hands-free US volumes. In software, a simulation environment incorporating the LINAC, couch, planning CT, and robotic US guidance hardware was developed. Placement of the robotic US hardware was guided by a target visibility map rendered on the CT surface by using the planning CT to simulate US propagation. The visibility map was validated in a prostate phantom and evaluated in patients by capturing live US from imaging positions suggested by the visibility map. In 20 liver SABR patients treated with VMAT, the simulation environment was used to virtually place the robotic hardware and US probe. Imaging targets were either planning target volumes (PTVs, range 5.9-679.5 ml) or gross tumor volumes (GTVs, range 0.9-343.4 ml). Presence or absence of mechanical interference with LINAC, couch, and patient body as well as interferences with treated beams was recorded. For PTV targets, robotic US guidance without mechanical interference was possible in 80% of the cases and guidance without beam interference was possible in 60% of the cases. For the smaller GTV targets, these proportions were 95% and 85%, respectively. GTV size (1/20), elongated shape (1/20), and depth (1/20) were the main factors limiting the availability of noninterfering imaging positions. The robotic US imaging system was deployed in two liver SABR patients during CT simulation with successful acquisition of 4D US sequences in different imaging positions. 
This study indicates that for VMAT liver SABR, robotic US imaging of a relevant internal target may be possible in 85% of the cases while using treatment plans currently deployed in the clinic. With beam replanning to account for the presence of robotic US guidance, intrafractional US may be an option for 95% of the liver SABR cases.

  19. Predicting Outcome after Pediatric Traumatic Brain Injury by Early Magnetic Resonance Imaging Lesion Location and Volume

    PubMed Central

    Smitherman, Emily; Hernandez, Ana; Stavinoha, Peter L.; Huang, Rong; Kernie, Steven G.; Diaz-Arrastia, Ramon

    2016-01-01

    Abstract Brain lesions after traumatic brain injury (TBI) are heterogeneous, rendering outcome prognostication difficult. The aim of this study is to investigate whether early magnetic resonance imaging (MRI) of lesion location and lesion volume within discrete brain anatomical zones can accurately predict long-term neurological outcome in children post-TBI. Fluid-attenuated inversion recovery (FLAIR) MRI hyperintense lesions in 63 children obtained 6.2±5.6 days postinjury were correlated with the Glasgow Outcome Scale Extended-Pediatrics (GOS-E Peds) score at 13.5±8.6 months. FLAIR lesion volume was expressed as hyperintensity lesion volume index (HLVI)=(hyperintensity lesion volume / whole brain volume)×100 measured within three brain zones: zone A (cortical structures); zone B (basal ganglia, corpus callosum, internal capsule, and thalamus); and zone C (brainstem). HLVI-total and HLVI-zone C predicted good and poor outcome groups (p<0.05). GOS-E Peds correlated with HLVI-total (r=0.39; p=0.002) and HLVI in all three zones: zone A (r=0.31; p<0.02); zone B (r=0.35; p=0.004); and zone C (r=0.37; p=0.003). In adolescents ages 13–17 years, HLVI-total correlated best with outcome (r=0.5; p=0.007), whereas in younger children under the age of 13, HLVI-zone B correlated best (r=0.52; p=0.001). Compared to patients with lesions in zone A alone or in zones A and B, patients with lesions in all three zones had a significantly higher odds ratio (4.38; 95% confidence interval, 1.19–16.0) for developing an unfavorable outcome. PMID:25808802
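    The HLVI defined in the abstract is straightforward to compute from binary segmentation masks. A minimal sketch; the toy masks and voxel counts are illustrative, and a real pipeline would restrict both masks to the relevant anatomical zone:

```python
import numpy as np

def hlvi(lesion_mask, brain_mask):
    """HLVI = (hyperintensity lesion volume / whole-brain volume) x 100,
    computed here from binary voxel masks sharing one voxel size."""
    return 100.0 * lesion_mask.sum() / brain_mask.sum()

brain = np.ones((10, 10, 10), dtype=bool)   # toy 1000-voxel brain mask
lesion = np.zeros_like(brain)
lesion[2:4, 2:4, 2:4] = True                # 8 lesion voxels
index = hlvi(lesion, brain)
```

Because both volumes are voxel counts scaled by the same voxel size, the physical voxel dimensions cancel out of the ratio.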

  20. A Review on Real-Time 3D Ultrasound Imaging Technology

    PubMed Central

    Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information about the scanned area, and it is hence necessary in intraoperative ultrasound examinations. Many publications have addressed real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review of how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067
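    Among the reconstruction algorithms such a review covers, the simplest pixel-based approach bins each pixel of every tracked 2D slice into a voxel grid. A minimal sketch, assuming 4×4 homogeneous pose matrices from the tracker; the function name and parameters are illustrative:

```python
import numpy as np

def bin_fill(slices, poses, vol_shape, spacing=1.0):
    """Pixel-based freehand-US reconstruction: push every pixel of each
    tracked 2D slice through its 4x4 pose matrix into a voxel grid,
    averaging all samples that land in the same voxel."""
    acc = np.zeros(vol_shape)
    cnt = np.zeros(vol_shape)
    for img, pose in zip(slices, poses):
        h, w = img.shape
        v, u = np.indices((h, w))
        # Homogeneous pixel coordinates; the slice plane is z = 0.
        pts = np.stack([u.ravel(), v.ravel(),
                        np.zeros(h * w), np.ones(h * w)])
        idx = np.round((pose @ pts)[:3] / spacing).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)[:, None]), axis=0)
        np.add.at(acc, tuple(idx[:, ok]), img.ravel()[ok])
        np.add.at(cnt, tuple(idx[:, ok]), 1)
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Toy sweep: four parallel slices translated along z.
slices = [np.full((4, 4), float(z)) for z in range(4)]
poses = []
for z in range(4):
    pose = np.eye(4)
    pose[2, 3] = z
    poses.append(pose)
vol = bin_fill(slices, poses, (4, 4, 4))
```

Hole-filling and interpolation steps, which the literature adds on top of this bin-filling, are omitted here for brevity.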

  1. A Review on Real-Time 3D Ultrasound Imaging Technology.

    PubMed

    Huang, Qinghua; Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information about the scanned area, and hence is valuable in intraoperative ultrasound examinations. Many publications have reported real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, however, a review of how to design an interactive system with appropriate processing algorithms has been missing, leaving a gap in the systematic understanding of the relevant technology. In this article, previous and the latest work on designing real-time or near real-time 3D ultrasound imaging systems is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.
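
Among the reconstruction algorithms such a review covers, pixel nearest-neighbor (PNN) bin-filling is one of the most common for freehand 3D ultrasound: each tracked 2D pixel is averaged into its nearest voxel. A minimal sketch of the insertion step (a generic textbook formulation, not any specific system's code):

```python
def pnn_insert(voxels, counts, points):
    """PNN bin-filling: average each tracked pixel sample (x, y, z, intensity)
    into the voxel nearest to its 3D position. `voxels` and `counts` are dicts
    keyed by integer voxel coordinates; a hole-filling pass would follow."""
    for x, y, z, val in points:
        key = (round(x), round(y), round(z))
        counts[key] = counts.get(key, 0) + 1
        # running mean of all samples that landed in this voxel
        voxels[key] = voxels.get(key, 0.0) + (val - voxels.get(key, 0.0)) / counts[key]
    return voxels

v, c = {}, {}
pnn_insert(v, c, [(0.2, 0.0, 0.0, 10.0), (0.4, 0.0, 0.0, 20.0)])
print(v[(0, 0, 0)])  # 15.0 (mean of the two samples)
```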

  2. Using synchrotron X-ray phase-contrast micro-computed tomography to study tissue damage by laser irradiation.

    PubMed

    Robinson, Alan M; Stock, Stuart R; Soriano, Carmen; Xiao, Xianghui; Richter, Claus-Peter

    2016-11-01

    The aim of this study was to determine whether X-ray micro-computed tomography can be used to locate and characterize tissue damage caused by laser irradiation, and to describe its advantages over classical histology for this application. A surgical CO2 laser, operated in single-pulse mode (100 milliseconds) at different power settings, was used to ablate different types of cadaveric animal tissue. Tissue samples were then harvested and imaged with synchrotron X-ray phase-contrast micro-computed tomography to generate stacks of virtual sections of the tissues. Subsequently, Fiji (ImageJ) software was used to locate tissue damage and then to quantify the volumes of laser ablation cones and thermal coagulation damage from 3D renderings of the tissue image stacks. Visual comparisons were made between tissue structures in the X-ray images and those visible by classical light-microscopy histology. We demonstrated that micro-computed tomography could be used to rapidly identify areas of surgical laser ablation, vacuolization, carbonization, and thermally coagulated tissue. Quantification and comparison of the ablation crater, which represents the volume of ablated tissue, and of the thermal coagulation zone volumes were performed faster than was possible with classical histology. We demonstrated that these procedures can be performed on fresh hydrated and non-sectioned plastic-embedded tissue, and that non-destructive micro-computed tomography can be applied to the visualization and analysis of laser-induced tissue damage without tissue sectioning. This will improve the evaluation of new surgical lasers and their corresponding effects on tissues. Lasers Surg. Med. 48:866-877, 2016. © 2016 Wiley Periodicals, Inc.
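
Quantifying an ablation cone from a micro-CT segmentation reduces to counting the voxels labeled as crater and multiplying by the voxel volume; a minimal sketch assuming isotropic voxels (the study's actual voxel size is not stated in the abstract):

```python
def ablation_volume_mm3(segmented_voxel_count, voxel_edge_mm):
    """Volume of a segmented region from a voxel count, assuming isotropic
    voxels of edge length voxel_edge_mm (in millimetres)."""
    return segmented_voxel_count * voxel_edge_mm ** 3

# hypothetical: 50,000 crater voxels at 10 micrometre (0.01 mm) resolution
print(ablation_volume_mm3(50_000, 0.01))
```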

  3. Perception-based 3D tactile rendering from a single image for human skin examinations by dynamic touch.

    PubMed

    Kim, K; Lee, S

    2015-05-01

    Diagnosis of skin conditions depends on the assessment of skin surface properties that are better characterized by tactile attributes such as stiffness, roughness, and friction than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface, and rendering the generated haptic surface in real time. The conversion from single 2D images to 3D surfaces was implemented using human perception data collected in a psychophysical experiment that measured human visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties found in prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify each of the three skin surfaces by using a haptic device (Falcon) only; no visual cue was provided during the experiment. The results indicate that our system provides sufficient performance to render discernibly different tactile feedback for different skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real time for the purpose of skin diagnosis, simulation, or training. Our system can also be used for other applications such as virtual reality and cosmetics. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christopher R. Johnson, Charles D. Hansen

    2001-10-29

    The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading-edge laboratories working in the areas of visualization, distributed computing, and high-performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world-class expertise in parallel rendering, deep image-based rendering, immersive environment technology, large-format multi-projector wall-based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality-of-service technology, and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, "Computational Grids", and CAVE technology and to add these to the teams that had developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.

  5. Interactive 3-D graphics workstations in stereotaxy: clinical requirements, algorithms, and solutions

    NASA Astrophysics Data System (ADS)

    Ehricke, Hans-Heino; Daiber, Gerhard; Sonntag, Ralf; Strasser, Wolfgang; Lochner, Mathias; Rudi, Lothar S.; Lorenz, Walter J.

    1992-09-01

    In stereotactic treatment planning, the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for a depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed, and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives for stereotactic treatment planning. For the first time, it is now possible to integrate all the necessary information into 3-D scenes, thus enabling interactive 3-D planning.
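
A maximum intensity projection keeps, for each ray through the volume, the brightest voxel. A minimal sketch of the idea, with a simple exponential depth weighting standing in for the authors' (unspecified) depth-enhancement scheme:

```python
import math

def depth_weighted_mip(volume, decay=0.02):
    """Depth-enhanced maximum intensity projection along the first axis.

    `volume` is a list of 2D slices (lists of lists). Voxels deeper along the
    viewing direction are attenuated by exp(-decay * depth) before the maximum
    is taken, so nearer vessels appear brighter -- a simplified depth cue, not
    the paper's actual method.
    """
    rows, cols = len(volume[0]), len(volume[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for z, slice_ in enumerate(volume):
        w = math.exp(-decay * z)
        for r in range(rows):
            for c in range(cols):
                v = slice_[r][c] * w
                if v > out[r][c]:
                    out[r][c] = v
    return out
```

With `decay=0` this reduces to a plain MIP; increasing `decay` trades intensity fidelity for depth perception.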

  6. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Owing to its convenience and non-invasiveness, ultrasound has become an essential tool in obstetrics for diagnosing fetal abnormalities during pregnancy. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to local ultrasound image features. In addition, to accelerate rendering, a thin shell is defined, based on the detected contours, to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.

  7. MR volumetric analysis of the course of nephroblastomatosis under chemotherapy in childhood.

    PubMed

    Günther, Patrick; Tröger, Jochen; Graf, Norbert; Waag, Karl Ludwig; Schenk, Jens-Peter

    2004-08-01

    Nephroblastomatosis is a paediatric renal disease that may undergo malignant transformation. When neoadjuvant chemotherapy is indicated for nephroblastomatosis or bilateral Wilms' tumours, exact volumetric analysis using high-speed data processing and visualization may aid in determining tumour response. Using 3D-volume-rendering software, the 0.5-T MRI data of a 2-year-old girl with bilateral nephroblastomatosis was analysed. Exact volume determination of foci of nephroblastomatosis was performed by automatic and manual segmentation, and the relation to normal renal parenchyma was determined over a 12-month period. At the first visit, 80% (460/547 ml) of the extremely enlarged right kidney was due to nephroblastomatosis. Total tumour volume within the right kidney decreased to 74 ml under chemotherapy. Volume analysis of the two emerging right-sided masses after treatment correctly suggested Wilms' tumour. Three-dimensional rendering of the growing masses aided the surgeon in nephron-sparing surgery during tumour resection.

  8. Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification

    PubMed Central

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2013-01-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the “gold standard” to evaluate the ability of DR images to detect and localize calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRR) from the CT volumes, we developed three projection methods, i.e., Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measures. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated with digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degrees for both NCC and NMI. The registration accuracy for the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate dual-energy DR images for the detection of coronary artery calcification. PMID:24386527

  9. Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification.

    PubMed

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2007-03-03

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images to detect and localize calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRR) from the CT volumes, we developed three projection methods, i.e., Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measures. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated with digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degrees for both NCC and NMI. The registration accuracy for the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate dual-energy DR images for the detection of coronary artery calcification.

  10. Automatic intensity-based 3D-to-2D registration of CT volume and dual-energy digital radiography for the detection of cardiac calcification

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2007-03-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images to detect and localize calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRR) from the CT volumes, we developed three projection methods, i.e., Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measures. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated with digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degrees for both NCC and NMI. The registration accuracy for the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate dual-energy DR images for the detection of coronary artery calcification.
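
The NCC similarity measure used above to compare a simulated projection with a DR image can be sketched as follows (a generic textbook formulation, not the authors' code):

```python
import math

def ncc(a, b):
    """Normalized cross correlation between two equally sized images,
    flattened to lists of pixel intensities. Returns a value in [-1, 1];
    1 means the images are identical up to brightness/contrast."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)
```

An intensity-based optimizer such as Downhill Simplex would adjust the six pose parameters of the CT volume to maximize this score against the fixed DR image.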

  11. Novel T lymphocyte proliferation assessment using whole mouse cryo-imaging

    NASA Astrophysics Data System (ADS)

    Wuttisarnwattana, Patiwet; Raza, Syed A.; Eid, Saada; Cooke, Kenneth R.; Wilson, David L.

    2014-03-01

    New imaging technologies enable one to assess T-cell proliferation, an important feature of the immunological response. However, none of the traditional imaging modalities allows one to examine T-cell function quantitatively with microscopic resolution and single-cell sensitivity over an entire mouse. To address this need, we established T-cell proliferation assays using 3D microscopic cryo-imaging. Assays include: (1) biodistribution of T-cells; (2) secondary lymphoid organ (SLO) volume measurement; (3) carboxyfluorescein succinimidyl ester (CFSE) dilution per cell as cells divide. To demonstrate the application, a graft-versus-host-disease (GVHD) model was used. 3D visualizations show that T-cells specifically homed to the SLOs (spleen and lymph nodes) as well as to GVHD target organs (such as the GI tract, liver, skin, and thymus). The spleen was chosen as representative of the SLOs. For spleen size analysis, volumes of red and white pulp were measured. Spleen volumes of the allogeneic mice (with GVHD) were significantly larger than those of the syngeneic mice (without GVHD) at 72 to 120 hours post-transplant. For the CFSE dilution approach, we employed color-coded volume rendering and the probability density function (PDF) of single-cell intensity to assess T-cell proliferation in the spleen. As compared to syngeneic T-cells, the allogeneic T-cells quickly aggregated in the spleen, as indicated by an increase of the CFSE signal over the first 48 hours. They then rapidly proliferated, as evidenced by reduced CFSE intensity at 48-96 hours. Results suggest that these assays can be used to study GVHD treatments, with T-cell proliferation and biodistribution as readouts. In summary, this is the first time that we are able to track and visualize T-cells in a whole mouse with single-cell sensitivity. We believe that our technique can be an alternative to traditional in vitro immunological proliferation assays by providing assessment of proliferation in an in vivo model.
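
The CFSE dilution readout works because the dye is partitioned between daughter cells, so fluorescence intensity roughly halves with each division. A minimal sketch of the resulting division estimate (an idealized model, ignoring measurement noise and autofluorescence):

```python
import math

def divisions_from_cfse(initial_intensity, measured_intensity):
    """Estimate the number of divisions a cell has undergone from CFSE
    dilution, assuming intensity halves per division:
        measured = initial / 2**n  =>  n = log2(initial / measured)."""
    return math.log2(initial_intensity / measured_intensity)

# a cell at 1/8 of the starting CFSE intensity has divided ~3 times
print(divisions_from_cfse(1000.0, 125.0))  # 3.0
```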

  12. METRO-APEX Volume 15.1: Industrialist's Manual No. 5, Caesar's Rendering Plant. Revised.

    ERIC Educational Resources Information Center

    University of Southern California, Los Angeles. COMEX Research Project.

    The Industrialist's Manual No. 5 (Caesar's Rendering Plant) is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an…

  13. ProteinShader: illustrative rendering of macromolecules

    PubMed Central

    Weber, Joseph R

    2009-01-01

    Background Cartoon-style illustrative renderings of proteins can help clarify structural features that are obscured by space-filling or ball-and-stick models, and recent advances in programmable graphics cards offer many new opportunities for improving illustrative renderings. Results The ProteinShader program, a new tool for macromolecular visualization, uses information from Protein Data Bank files to produce illustrative renderings of proteins that approximate what an artist might create by hand using pen and ink. A combination of Hermite and spherical linear interpolation is used to draw smooth, gradually rotating three-dimensional tubes and ribbons with a repeating pattern of texture coordinates, which allows the application of texture mapping, real-time halftoning, and smooth edge lines. This free, platform-independent, open-source program is written primarily in Java, but also makes extensive use of the OpenGL Shading Language to modify the graphics pipeline. Conclusion By programming the graphics processing unit, ProteinShader is able to produce high-quality images and illustrative rendering effects in real time. The main feature that distinguishes ProteinShader from other free molecular visualization tools is its use of texture mapping techniques that allow two-dimensional images to be mapped onto the curved three-dimensional surfaces of ribbons and tubes with minimum distortion of the images. PMID:19331660
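
The spherical linear interpolation (slerp) mentioned above is what keeps tube cross-sections rotating at a constant angular rate along the backbone. A minimal sketch of slerp in its generic form (not ProteinShader's Java/GLSL code):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit vectors (or unit
    quaternions) q0 and q1, for t in [0, 1]. Interpolates along the great
    circle at constant angular velocity."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(q0, q1))))
    theta = math.acos(dot)
    if theta < 1e-9:  # nearly parallel: fall back to linear interpolation
        return [a + t * (b - a) for a, b in zip(q0, q1)]
    s = math.sin(theta)
    w0 = math.sin((1.0 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(q0, q1)]
```

Unlike plain linear interpolation, the result stays on the unit sphere, so the interpolated frame never shrinks or speeds up mid-turn.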

  14. Terahertz Computed Tomography of NASA Thermal Protection System Materials

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.

    2011-01-01

    A terahertz axial computed tomography system has been developed that uses time-domain measurements to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 cubic meters (1 cubic foot), without the safety concerns of x-ray computed tomography. In this study, the system is evaluated for its ability to detect and characterize flat-bottom holes, drilled holes, and embedded voids in foam materials utilized as thermal protection on the external fuel tanks of the Space Shuttle. X-ray micro-computed tomography was also performed on the samples to compare against the terahertz computed tomography results and better define embedded voids. Limits of detectability based on depth and size for the samples used in this study are loosely defined. Image sharpness and morphology characterization ability for terahertz computed tomography are qualitatively described.

  15. Non-invasive coronary angiography with multislice computed tomography. Technology, methods, preliminary experience and prospects.

    PubMed

    Traversi, Egidio; Bertoli, Giuseppe; Barazzoni, Giancarlo; Baldi, Maurizia; Tramarin, Roberto

    2004-02-01

    The recent technical developments in multislice computed tomography (MSCT), with ECG retro-gated image reconstruction, have elicited great interest in the possibility of accurate non-invasive imaging of the coronary arteries. The latest generation of MSCT systems, with 8-16 rows of detectors, permits acquisition of the whole cardiac volume during a single 15-20 s breath-hold with submillimetric definition of the images and an outstanding signal-to-noise ratio. Thus the race among MSCT, electron beam computed tomography, and cardiac magnetic resonance imaging to best provide routine and reliable imaging of the coronary arteries in clinical practice has recommenced. Currently available MSCT systems offer different options for both cardiac image acquisition and reconstruction, including multiplanar and curved multiplanar reconstruction, three-dimensional volume rendering, maximum intensity projection, and virtual angioscopy. In our preliminary experience including 176 patients with known or suspected coronary artery disease, MSCT was feasible in 161 (91.5%) and showed a sensitivity of 80.4% and a specificity of 80.3%, with respect to standard coronary angiography, in detecting critical stenosis in coronary arteries and arterial or venous bypass grafts. These results correspond to a positive predictive value of 58.6% and a negative predictive value of 92.2%. The true role that MSCT is likely to play in the future of non-invasive coronary imaging is still to be defined. Nevertheless, the huge amount of data obtainable by MSCT, along with rapid technological advances, shorter acquisition times, and reconstruction algorithm developments, will make the technique stronger, and applications are expected not only in non-invasive coronary angiography, but also in cardiac function and myocardial perfusion evaluation, as an all-in-one examination.
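
The predictive values reported above follow from sensitivity, specificity, and disease prevalence via Bayes' rule; a minimal sketch (the prevalence argument is an assumed input, since the abstract reports only the resulting values):

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from test characteristics
    and disease prevalence (all as fractions in [0, 1])."""
    tp = sensitivity * prevalence              # true positives
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false positives
    tn = specificity * (1.0 - prevalence)      # true negatives
    fn = (1.0 - sensitivity) * prevalence      # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# symmetric example: 80% sensitivity/specificity at 50% prevalence
print(ppv_npv(0.8, 0.8, 0.5))  # (0.8, 0.8)
```

At lower prevalence, PPV drops while NPV rises, which is why the paper's 80% sensitivity/specificity yields a PPV of only 58.6% but an NPV of 92.2%.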

  16. A JPEG backward-compatible HDR image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.
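
Tone mapping compresses HDR luminance into the displayable LDR range. As one concrete example, the simple global Reinhard curve L/(1+L), one of many published operators and not necessarily among those the authors evaluated:

```python
def tone_map_reinhard(luminances):
    """Global Reinhard tone-mapping operator: maps scene luminance L >= 0
    into display range [0, 1) via L / (1 + L). Bright values are compressed
    strongly while dark values pass through almost unchanged."""
    return [l / (1.0 + l) for l in luminances]

# a 4-stop range of scene luminances squeezed into [0, 1)
print(tone_map_reinhard([0.1, 1.0, 4.0, 16.0]))
```

A backward-compatible codec like the one proposed can store such a tone-mapped LDR base layer as ordinary JPEG, plus residual data from which the HDR original is reconstructed.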

  17. Correlation between age and the parameters of medial epiphysis and metaphysis of the clavicle using CT volume rendering images.

    PubMed

    Hua, Wei; Guang-you, Zhu; Lei, Wan; Chong-liang, Ying; Ya-hui, Wang

    2014-11-01

    The aim of this study was to evaluate the correlation between age and parameters of the medial clavicular epiphysis based on CT volume rendering (VR) images. CT images of the medial clavicle from 795 teenagers (387 male and 408 female) between 15 and 25 years were collected in East and South China. VR images were reconstructed from 0.60-mm-thick-slice CT images. The ratio of epiphyseal diameter to metaphyseal diameter and the ratio of epiphyseal area to metaphyseal area on both sides of the medial clavicle were measured and calculated by three different examiners, and the consistency of the quantitative data was checked with intraclass correlation coefficients (ICC). The diameter ratios of the left and right sides are denoted X1 and X3, and the area ratios of the left and right sides X2 and X4. Descriptive statistical analysis of the data was performed and several mathematical models were established using least squares. CT images from an additional 60 teenagers (30 male and 30 female) were used to verify the accuracy of the established mathematical models. ICCs indicated that the three examiners' measurements of epiphyseal diameter, metaphyseal diameter, the epiphyseal/metaphyseal diameter ratio, epiphyseal area, metaphyseal area, and the epiphyseal/metaphyseal area ratio of the medial clavicle on the left and right sides approached 1. The 95% reference range for the mean of every examination in both genders gradually increased with age. Medial clavicular epiphyseal development occurred earlier in females than in males, especially from 15 to 21 years; the gender difference became smaller after 21 years. At best, the accuracy of the mathematical models was 73.5% (±1.0 year) and 85.3% (±1.5 years) for males, and 68.6% (±1.0 year) and 82.2% (±1.5 years) for females. The methods of data collection and analysis were reliable and feasible. Given the high accuracy of these established mathematical models, it is applicable to use the epiphyseal/metaphyseal diameter ratio and the epiphyseal/metaphyseal area ratio of the left and right medial clavicle to estimate a teenager's age. Bearing this in mind, further studies are needed to evaluate slice thickness as the most critical parameter. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Flat panel angiography images in the post-operative follow-up of surgically clipped intracranial aneurysms.

    PubMed

    Budai, Caterina; Cirillo, Luigi; Patruno, Francesco; Dall'olio, Massimo; Princiotta, Ciro; Leonardi, Marco

    2014-04-01

    Cerebral aneurysms must be monitored for varying periods after surgical and/or endovascular treatment and the duration of follow-up will depend on the type of therapy and the immediate post-operative outcome. Surgical clipping for intracranial aneurysms is a valid treatment but the metal clips generate artefacts so that follow-up monitoring still relies on catheter angiography. This study reports our preliminary experience with volumetric angiography using a Philips Allura Xper FD biplane system in the post-operative monitoring of aneurysm residues or major vascular changes following the surgical clipping of intracranial aneurysms. Volumetric angiography yields not only volume-rendered (VR) images, but a volume CT can also be reconstructed at high spatial and contrast resolution from a single acquisition, significantly enhancing the technique's diagnostic power. Between August 2012 and April 2013, we studied 19 patients with a total of 26 aneurysms treated by surgical clipping alone or in combination with endovascular treatment. All patients underwent standard post-operative angiographic follow-up including a rotational volumetric acquisition. Follow-up monitoring disclosed eight aneurysm residues whose assessment was optimal after surgical clipping both in patients with one metal clip and in those with two or more clips. In addition, small residues (1.3 mm) could be monitored together with any change in the calibre or course of vessels located adjacent to the clips. In conclusion, flat panel volume CT is much more reliable than the old 3D acquisitions that yielded only VR images. This is particularly true in patients with small aneurysm residues or lesions with multiple metal clips.

  19. Gesture-Controlled Interface for Contactless Control of Various Computer Programs with a Hooking-Based Keyboard and Mouse-Mapping Technique in the Operating Room

    PubMed Central

    Park, Ben Joonyeon; Jang, Taekjin; Choi, Jong Woo; Kim, Namkug

    2016-01-01

    We developed a contactless interface that exploits hand gestures to effectively control medical images in the operating room. We developed an in-house program called GestureHook that exploits message hooking techniques to convert gestures into specific functions. For quantitative evaluation of this program, we used gestures to control images of a dynamic biliary CT study and compared the results with those of a mouse (8.54 ± 1.77 s to 5.29 ± 1.00 s; p < 0.001) and measured the recognition rates of specific gestures and the success rates of tasks based on clinical scenarios. For clinical applications, this program was set up in the operating room to browse images for plastic surgery. A surgeon browsed images from three different programs: CT images from a PACS program, volume-rendered images from a 3D PACS program, and surgical planning photographs from a basic image viewing program. All programs could be seamlessly controlled by gestures and motions. This approach can control all operating room programs without source code modification and provide surgeons with a new way to safely browse through images and easily switch applications during surgical procedures. PMID:26981146

  1. Relation between cannabis use and subcortical volumes in people at clinical high risk of psychosis

    PubMed Central

    Buchy, Lisa; Mathalon, Daniel H.; Cannon, Tyrone D.; Cadenhead, Kristin S.; Cornblatt, Barbara A.; McGlashan, Thomas H.; Perkins, Diana O.; Seidman, Larry J.; Tsuang, Ming T.; Walker, Elaine F.; Woods, Scott W.; Bearden, Carrie E.; Addington, Jean

    2016-01-01

    Among people at genetic risk of schizophrenia, those who use cannabis show smaller thalamic and hippocampal volumes. We evaluated this relationship in people at clinical high risk (CHR) of psychosis. The Alcohol and Drug Use Scale was used to identify 132 CHR cannabis users, the majority of whom were non-dependent cannabis users, 387 CHR non-users, and 204 healthy control non-users, and all participants completed magnetic resonance imaging scans. Volumes of the thalamus, hippocampus and amygdala were extracted with FreeSurfer, and compared across groups. Comparing all CHR participants with healthy control participants revealed no significant differences in volumes of any ROI. However, when comparing CHR users to CHR non-users, a significant ROI × Cannabis group effect emerged: CHR users showed significantly smaller amygdala compared to CHR non-users. However, when limiting analysis to CHR subjects who reported using alcohol at a ‘use without impairment’ severity level, the amygdala effect was non-significant; rather, smaller hippocampal volumes were seen in CHR cannabis users compared to non-users. Controlling statistically for effects of alcohol and tobacco use rendered all results non-significant. These results highlight the importance of controlling for residual confounding effects of other substance use when examining the relationship between cannabis use and neural structure. PMID:27289213

  2. Autostereoscopic image creation by hyperview matrix controlled single pixel rendering

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2017-06-01

    As awareness of stereoscopic cinema has grown, so has the perception of its limitations. It is not only that the additional glasses are uncomfortable and annoying; there are tangible arguments against the glasses themselves. These "stereoscopic deficits" are caused by the 3D glasses: in contrast to natural viewing with the naked eye, artificial 3D viewing through glasses introduces specific "unnatural" side effects. Most moviegoers have experienced unspecific discomfort in 3D cinema, which they may have attributed to insufficient image quality. Quality problems with 3D glasses can indeed be reduced by technical improvement, but this simple answer can mislead decision makers, and already has, into settling for the existing glasses-based solution. It must be underlined that the glasses cause inherent difficulties that no modest advancement can ever remove, because the glasses themselves initiate them. To overcome the limitations of stereoscopy in display applications, several technologies have been proposed to create a 3D impression without the need for 3D glasses, known collectively as autostereoscopy. But even today's autostereoscopic displays cannot solve all viewing problems and still show limitations. A hyperview display could be a suitable candidate if an affordable device could be built and the necessary content generated in an acceptable time frame. All autostereoscopic displays based on the ideas of light fields, integral photography, or super-multiview can be unified within the concept of hyperview. It is essential for functionality that each of these display technologies uses numerous different perspective images to create the 3D impression. Calculating such a high number of views requires far more computing time than forming a simple stereoscopic image pair.
The hyperview concept describes the screen image of any 3D technology with a single simple equation. This formula can be used to create a specific hyperview matrix for a given 3D display, independent of the technology used. A hyperview matrix may reference a very large number of images and acts as an instruction for a subsequent rendering pass over individual pixels. Naturally, a single pixel delivers an image with no resolution and conveys no idea of the rendered scene. However, by applying the method of pixel recycling, a 3D image can still be perceived even when all source images are different. It is shown that several million perspectives can be rendered with GPU support, benefiting from the hyperview matrix. As a result, a conventional autostereoscopic display designed to present only a few perspectives can show a hyperview image when driven by a suitable hyperview matrix: a millions-of-views hyperview image can be presented on a conventional autostereoscopic display, provided that every pixel of the display is allocated to a different source image. Controlled by the hyperview matrix, an adapted renderer can produce a full hyperview image in real time.

  3. From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data

    PubMed Central

    Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred

    2014-01-01

    Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. PMID:25145678

  4. Three-dimensional magnetic resonance imaging based on time-of-flight magnetic resonance angiography for superficial cerebral arteriovenous malformation--technical note.

    PubMed

    Murata, Takahiro; Horiuchi, Tetsuyoshi; Rahmah, Nunung Nur; Sakai, Keiichi; Hongo, Kazuhiro

    2011-01-01

    Direct surgery remains important for the treatment of superficial cerebral arteriovenous malformation (AVM). Surgical planning based on careful analysis of various neuroimaging modalities can aid in resection of superficial AVM with favorable outcome. Three-dimensional (3D) magnetic resonance (MR) imaging reconstructed from time-of-flight (TOF) MR angiography was developed as an adjunctive tool for surgical planning of superficial AVM. 3-T TOF MR imaging without contrast medium was performed preoperatively in patients with superficial AVM. The images were imported into OsiriX imaging software and the 3D reconstructed MR image was produced using the volume rendering method. This 3D MR image clearly visualized the surface angioarchitecture of the AVM with the surrounding brain on a single image, and clarified the feeding arteries and draining veins and their relationship with the sulci or fissures surrounding the nidus. A 3D MR image of the whole AVM angioarchitecture was also displayed by skeletonization of the surrounding brain. The preoperative 3D MR image corresponded to the intraoperative view. Feeders on the brain surface were easily confirmed and obliterated during surgery with the aid of the 3D MR images. 3D MR imaging for surgical planning of superficial AVM is simple and noninvasive to perform, enhances intraoperative orientation, and is helpful for successful resection.

  5. [MODERN INSTRUMENTS FOR EAR, NOSE AND THROAT RENDERING AND EVALUATION IN RESEARCHES ON RUSSIAN SEGMENT OF THE INTERNATIONAL SPACE STATION].

    PubMed

    Popova, I I; Orlov, O I; Matsnev, E I; Revyakin, Yu G

    2016-01-01

    The paper reports the results of testing diagnostic video systems enabling digital rendering of ENT organs, teeth and jaws. The authors substantiate the criteria for choosing and integrating imaging systems into the future LOR kit on the Russian segment of the International Space Station, developed for examination and download of high-quality images of cosmonauts' ENT organs, periodontium and teeth.

  6. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
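    One of the practical issues the survey raises, image assembly from multiple renderers, can be illustrated with a tiny sort-last compositing sketch: each worker rasterizes its share of the scene into a private color/depth buffer, and a final pass keeps the nearest fragment per pixel. All names, sizes, and fragment data below are illustrative, not taken from the article.

```python
# A minimal sketch of sort-last parallel rendering: each worker renders its
# subset of the scene into a private (color, depth) buffer, and a compositing
# pass performs a per-pixel depth comparison to assemble the final image.

INF = float("inf")

def render_partial(width, fragments):
    """Rasterize one worker's fragments (pixel, depth, color) into buffers."""
    color = [0] * width          # 0 = background
    depth = [INF] * width
    for x, z, c in fragments:
        if z < depth[x]:         # keep the nearest fragment per pixel
            depth[x], color[x] = z, c
    return color, depth

def composite(partials, width):
    """Depth-compare partial images pixel by pixel (sort-last assembly)."""
    color = [0] * width
    depth = [INF] * width
    for pc, pd in partials:
        for x in range(width):
            if pd[x] < depth[x]:
                depth[x], color[x] = pd[x], pc[x]
    return color

w = 4
worker_a = render_partial(w, [(0, 2.0, 1), (1, 5.0, 1)])
worker_b = render_partial(w, [(1, 3.0, 2), (2, 1.0, 2)])
print(composite([worker_a, worker_b], w))  # [1, 2, 2, 0]
```

    The compositing step is order-independent, which is why sort-last designs distribute scene data freely among workers at the cost of exchanging full image-plus-depth buffers during assembly.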

  7. State of the "art": a taxonomy of artistic stylization techniques for images and video.

    PubMed

    Kyprianidis, Jan Eric; Collomosse, John; Wang, Tinghuai; Isenberg, Tobias

    2013-05-01

    This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.

  8. Detection of neuron membranes in electron microscopy images using a serial neural network architecture.

    PubMed

    Jurrus, Elizabeth; Paiva, Antonio R C; Watanabe, Shigeki; Anderson, James R; Jones, Bryan W; Whitaker, Ross T; Jorgensen, Erik M; Marc, Robert E; Tasdizen, Tolga

    2010-12-01

    Study of nervous systems via the connectome, the map of connectivities of all neurons in that system, is a challenging problem in neuroscience. Towards this goal, neurobiologists are acquiring large electron microscopy datasets. However, the sheer volume of these datasets renders manual analysis infeasible. Hence, automated image analysis methods are required for reconstructing the connectome from these very large image collections. Segmentation of neurons in these images, an essential step of the reconstruction pipeline, is challenging because of noise, anisotropic shapes and brightness, and the presence of confounding structures. The method described in this paper uses a series of artificial neural networks (ANNs) in a framework combined with a feature vector that is composed of image intensities sampled over a stencil neighborhood. Several ANNs are applied in series allowing each ANN to use the classification context provided by the previous network to improve detection accuracy. We develop the method of serial ANNs and show that the learned context does improve detection over traditional ANNs. We also demonstrate advantages over previous membrane detection methods. The results are a significant step towards an automated system for the reconstruction of the connectome. Copyright 2010 Elsevier B.V. All rights reserved.

  9. Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1996-01-01

    As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.

  10. Space Object and Light Attribute Rendering (SOLAR) Projection System

    DTIC Science & Technology

    2017-05-08

    A state-of-the-art planetarium-style projection system called Space Object and Light Attribute Rendering (SOLAR) was developed at the University at Buffalo for emulation of a variety of close-proximity and long-range imaging experiments.

  11. Roughness based perceptual analysis towards digital skin imaging system with haptic feedback.

    PubMed

    Kim, K

    2016-08-01

    To diagnose skin diseases such as psoriasis or atopic eczema precisely, analyzing skin roughness by palpation is essential. However, optical-sensor-based skin imaging systems do not allow dermatologists to touch skin images. Solving this problem requires a new haptic rendering technology that can accurately display skin roughness. In addition, the rendering algorithm must be able to filter out the spatial noise created during 2D-to-3D image conversion without losing the original roughness of the skin image. This study introduces a perceptual way to design a noise filter that removes spatial noise while recovering maximal roughness, based on an understanding of human sensitivity to surface roughness. A visuohaptic rendering system that lets a user see and touch digital skin surface roughness has been developed, including a method for estimating geometric roughness from a meshed surface. A psychophysical experiment was then designed and conducted with 12 human subjects to measure perception of surface roughness through the developed visual and haptic interfaces. The experiment showed that touch is more sensitive at lower surface roughness, and vice versa. Perception through both senses, vision and touch, becomes less sensitive to surface distortions as roughness increases. When both channels, the visual and haptic interfaces, are used together, the ability to detect roughness abnormalities is greatly improved by sensory integration within the developed visuohaptic rendering system. The results can serve as a guideline for designing a noise filter that perceptually removes spatial noise while recovering maximal roughness values from a digital skin image obtained by optical sensors. They also confirm that the developed visuohaptic rendering system can help dermatologists and skin care professionals examine skin conditions using vision and touch at the same time. 
© 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
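    The paper's mesh-based roughness estimator is more involved, but the basic quantity behind such estimates can be illustrated with the standard RMS roughness of height samples about their mean; this stands in only for the general idea, not the authors' method.

```python
# Illustrative sketch: a common geometric roughness measure is the RMS
# deviation (Rq) of surface height samples about their mean line. The paper's
# estimator works on a meshed surface; this 1D version shows only the concept.
import math

def rms_roughness(heights):
    """Root-mean-square deviation of height samples about the mean."""
    mean = sum(heights) / len(heights)
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))

flat  = [1.0, 1.0, 1.0, 1.0]
rough = [0.0, 2.0, 0.0, 2.0]
print(rms_roughness(flat))   # 0.0
print(rms_roughness(rough))  # 1.0
```

    A spatial noise filter tuned too aggressively would pull the rough profile toward the flat one, which is exactly the roughness loss the study's perceptual filter design tries to avoid.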

  12. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  13. An Incremental Weighted Least Squares Approach to Surface Light Fields

    NASA Astrophysics Data System (ADS)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
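    The core mechanics, fitting local models at node centers and blending them at render time, can be sketched in one dimension. This toy uses degree-0 fits (weighted means) and a Gaussian kernel, whereas the paper fits low-degree polynomials of exitant radiance over position and direction; the kernel width and data are illustrative.

```python
# A toy 1D sketch of the WLS node-center idea: each node stores a weighted
# least squares fit of nearby samples (degree 0 here, i.e. a weighted mean),
# and reconstruction blends the node fits with normalized kernel weights.
import math

def kernel(d, h=1.0):
    """Gaussian weighting kernel with illustrative width h."""
    return math.exp(-(d / h) ** 2)

def fit_nodes(samples, centers):
    """Degree-0 WLS fit: kernel-weighted mean of samples around each center."""
    fits = []
    for c in centers:
        wsum = vsum = 0.0
        for x, v in samples:
            w = kernel(x - c)
            wsum += w
            vsum += w * v
        fits.append(vsum / wsum)
    return fits

def reconstruct(x, centers, fits):
    """Blend node fits with normalized kernel weights (partition of unity)."""
    ws = [kernel(x - c) for c in centers]
    return sum(w * f for w, f in zip(ws, fits)) / sum(ws)

samples = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]   # (position, radiance)
centers = [0.0, 2.0]                             # WLS node centers
fits = fit_nodes(samples, centers)
mid = reconstruct(1.0, centers, fits)            # blend of both nodes
```

    Because each node's fit depends only on its own weighted sums, new samples can be folded in incrementally as images arrive, which is the property the paper exploits for human-in-the-loop capture.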

  14. 3D Volumetric Analysis of Fluid Inclusions Using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Proussevitch, A.; Mulukutla, G.; Sahagian, D.; Bodnar, B.

    2009-05-01

    Fluid inclusions preserve valuable information regarding hydrothermal, metamorphic, and magmatic processes. The molar quantities of liquid and gaseous components in the inclusions can be estimated from their volumetric measurements at room temperatures combined with knowledge of the PVTX properties of the fluid and homogenization temperatures. Thus, accurate measurements of inclusion volumes and their two phase components are critical. One of the greatest advantages of Laser Scanning Confocal Microscopy (LSCM) applied to fluid inclusion analysis is that it is affordable for large numbers of samples, given the appropriate software analysis tools and methodology. Our present work is directed toward developing those tools and methods. For the last decade LSCM has been considered as a potential method for inclusion volume measurements. Nevertheless, adequate and accurate measurement by LSCM has not yet been successful for fluid inclusions containing non-fluorescing fluids due to many technical challenges in image analysis, despite the fact that the cost of collecting raw LSCM imagery has dramatically decreased in recent years. These problems mostly relate to image analysis methodology and software tools that are needed for pre-processing and image segmentation, which enable solid, liquid and gaseous components to be delineated. Other challenges involve image quality and contrast, which is controlled by fluorescence of the material (most aqueous fluid inclusions do not fluoresce at the appropriate laser wavelengths), material optical properties, and application of transmitted and/or reflected confocal illumination. In this work we have identified the key problems of image analysis and propose some potential solutions. For instance, we found that higher-contrast pseudo-confocal transmitted-light images could be overlaid with poor-contrast true-confocal reflected-light images within the same stack of z-ordered slices. 
This approach allows one to narrow the interface boundaries between the phases before the application of segmentation routines. In turn, we found that an active contour segmentation technique works best for these types of geomaterials. The method was developed by adapting a medical software package implemented using the Insight Toolkit (ITK) set of algorithms developed for segmentation of anatomical structures. We have developed a manual analysis procedure with the potential of 2 micron resolution in 3D volume rendering that is specifically designed for application to fluid inclusion volume measurements.

  15. Immersive volume rendering of blood vessels

    NASA Astrophysics Data System (ADS)

    Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.

    2012-03-01

    In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, wireframe surface to give structure, and utilize the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians, by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena, and can be a great help to medical experts for treatment planning.
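    The sparse-storage idea from the abstract, discarding empty regions of the resampled volume, can be sketched compactly. A flat brick dictionary stands in for the paper's octree, and the brick size, volume size, and data below are illustrative.

```python
# A simplified sketch of sparse brick storage for a mostly-empty volume (such
# as blood vessels): the volume is split into fixed-size bricks and empty
# bricks are simply never stored. A dict keyed by brick coordinates stands in
# for the octree used in the paper.

BRICK = 4  # brick edge length in voxels (illustrative)

def build_bricks(volume):
    """Store only bricks containing at least one non-zero voxel."""
    bricks = {}
    for (x, y, z), v in volume.items():
        if v:
            key = (x // BRICK, y // BRICK, z // BRICK)
            bricks.setdefault(key, {})[(x % BRICK, y % BRICK, z % BRICK)] = v
    return bricks

def sample(bricks, x, y, z):
    """Look up a voxel; untouched regions read back as empty (0)."""
    brick = bricks.get((x // BRICK, y // BRICK, z // BRICK))
    return brick.get((x % BRICK, y % BRICK, z % BRICK), 0) if brick else 0

# A mostly-empty volume with a single bright voxel inside a vessel.
volume = {(5, 5, 5): 255}
bricks = build_bricks(volume)
print(len(bricks))               # 1  (all empty bricks discarded)
print(sample(bricks, 5, 5, 5))   # 255
print(sample(bricks, 0, 0, 0))   # 0
```

    Only occupied bricks consume memory, which is what makes resampling an unstructured tetrahedral vessel mesh onto a regular grid tractable for slice-based 3D texture rendering.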

  16. CubeSat Artist Rendering and NASA M-Cubed/COVE

    NASA Image and Video Library

    2012-02-14

    The image on the left is an artist rendering of Montana State University Explorer 1 CubeSat; at right is a CubeSat created by the University of Michigan designated the Michigan Mulitpurpose Mini-satellite, or M-Cubed.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Cyrus; Larsen, Matt; Brugger, Eric

    Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.

  18. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    PubMed

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics needed to restore various functions such as reading, object and face recognition, and object grasping. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. 
These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
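    The control rendering described above, resizing the image onto the electrode array according to average pixel brightness, amounts to block-averaging. A small sketch is below; the image and grid sizes are illustrative rather than the study's 15 × 18 array.

```python
# A minimal sketch of the control rendering: block-average a brightness image
# down to the electrode-array resolution, so each simulated phosphene carries
# the mean brightness of its image region. Sizes here are illustrative.

def to_electrode_array(image, rows, cols):
    """Block-average a 2D brightness image onto a rows x cols electrode grid."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

image = [[0, 0, 255, 255],
         [0, 0, 255, 255]]
print(to_electrode_array(image, 1, 2))  # [[0.0, 255.0]]
```

    The distance-based and wireframe strategies in the study replace the brightness input with depth values or detected edges before this downsampling step, which is what made the low-resolution rendering usable for wayfinding.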

  19. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
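    One way to see how stochastic rendering yields controllable transparency without depth sorting is the probabilistic opacity relation: if each of k overlapping points covering a pixel is kept with probability p, the expected pixel opacity is 1 − (1 − p)^k. The sketch below illustrates only this mechanics, under my reading of the abstract, and is not the authors' exact algorithm.

```python
# Sketch of stochastic transparency: each point covering a pixel survives with
# probability p, so a pixel under k overlapping points has expected opacity
# 1 - (1 - p)^k, with no depth sorting required. Numbers are illustrative.
import random

def expected_opacity(p, k):
    """Analytic opacity for k overlapping points each kept with probability p."""
    return 1.0 - (1.0 - p) ** k

def measured_opacity(p, k, trials=100_000, seed=1):
    """Monte Carlo check: fraction of trials where any of k points survives."""
    rng = random.Random(seed)
    hits = sum(any(rng.random() < p for _ in range(k)) for _ in range(trials))
    return hits / trials

p, k = 0.2, 3
print(round(expected_opacity(p, k), 3))   # 0.488
# measured_opacity(p, k) should agree to within a few thousandths
```

    Adjusting p per object is what makes the opacity of each laser-scanned object flexibly controllable while the per-point rendering stays order-independent.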

  20. Lighting design for globally illuminated volume rendering.

    PubMed

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    With the evolution of graphics hardware, high quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects which are closer to real world scenes, and has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.
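    The goal of the tone mapping step, compressing overexposed areas while keeping contrast in the dark areas, can be illustrated with the classic Reinhard curve L/(1+L). This is a generic stand-in for illustration, not the automatic operator proposed in the paper.

```python
# Illustrative tone mapping sketch (not the paper's operator): the classic
# Reinhard curve L / (1 + L) compresses bright luminance values toward 1
# while leaving dark values nearly linear, preserving shadow contrast.

def reinhard(luminance):
    """Map scene luminance in [0, inf) to display range [0, 1)."""
    return luminance / (1.0 + luminance)

print(round(reinhard(0.1), 3))    # 0.091 : dark areas stay close to linear
print(round(reinhard(100.0), 3))  # 0.99  : highlights compressed below 1
```

    The paper's automatic operator additionally adapts to the rendered image content, whereas this fixed curve applies the same compression everywhere.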

  1. Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.

    PubMed

    Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon

    2017-09-01

    A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  2. Demons registration for in vivo and deformable laser scanning confocal endomicroscopy

    NASA Astrophysics Data System (ADS)

    Chiew, Wei Ming; Lin, Feng; Seah, Hock Soon

    2017-09-01

    A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortion due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the image-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information, including registration forces and partial renderings of the captured data, are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithmic design is intended for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry.

  3. The Resource, Spring 2002

    DTIC Science & Technology

    2002-01-01

    wrappers to other widely used languages, namely Tcl/Tk, Java, and Python. VTK is very powerful and covers polygonal models and image processing classes and...follows: • Large Data Visualization and Rendering • Information Visualization for Beginners • Rendering and Visualization in Parallel Environments

  4. An application of the MPP to the interactive manipulation of stereo images of digital terrain models

    NASA Technical Reports Server (NTRS)

    Pol, Sanjay; Mcallister, David; Davis, Edward

    1987-01-01

    Massively Parallel Processor algorithms were developed for the interactive manipulation of flat-shaded digital terrain models defined over grids. The emphasis is on real-time manipulation of stereo images. Standard graphics transformations are applied to a 128 x 128 grid of elevations, followed by shading and a perspective projection to produce the right-eye image. The surface is then rendered using a simple painter's algorithm for hidden-surface removal. The left-eye image is produced by rotating the surface 6 degs about the viewer's y-axis, followed by a perspective projection and rendering of the image as described above. The left- and right-eye images are then presented on a graphics device using standard stereo technology. Performance evaluations and comparisons are presented.
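    The left/right-eye construction described above, a 6-degree rotation about the viewer's y-axis followed by a perspective projection, can be sketched on a toy point set (the grid size, eye distance, and coordinates below are illustrative, not from the paper):

```python
import math

def perspective_project(points, d=4.0):
    """Project 3-D points onto the z=0 plane, eye at distance d behind it."""
    return [(d * x / (d + z), d * y / (d + z)) for x, y, z in points]

def rotate_y(points, degrees):
    """Rotate points about the viewer's y-axis."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

# A toy 2x2 "terrain" of elevations (x, y=height, z) on a unit grid.
terrain = [(0, 0.1, 0), (1, 0.3, 0), (0, 0.2, 1), (1, 0.5, 1)]
right_eye = perspective_project(terrain)
left_eye = perspective_project(rotate_y(terrain, 6.0))
# The two projections differ slightly in x: the horizontal disparity
# that standard stereo hardware fuses into a depth percept.
```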

  5. Hippocampal subfield segmentation in temporal lobe epilepsy: Relation to outcomes.

    PubMed

    Kreilkamp, B A K; Weber, B; Elkommos, S B; Richardson, M P; Keller, S S

    2018-06-01

    To investigate the clinical and surgical outcome correlates of preoperative hippocampal subfield volumes in patients with refractory temporal lobe epilepsy (TLE) using a new magnetic resonance imaging (MRI) multisequence segmentation technique. We recruited 106 patients with TLE and hippocampal sclerosis (HS) who underwent conventional T1-weighted and T2 short TI inversion recovery MRI. An automated hippocampal segmentation algorithm was used to identify twelve subfields in each hippocampus. A total of 76 patients underwent amygdalohippocampectomy and postoperative seizure outcome assessment using the standardized ILAE classification. Semiquantitative hippocampal internal architecture (HIA) ratings were correlated with hippocampal subfield volumes. Patients with left TLE had smaller volumes of the contralateral presubiculum and hippocampus-amygdala transition area compared to those with right TLE. Patients with right TLE had reduced contralateral hippocampal tail volumes and improved outcomes. In all patients, there were no significant relationships between hippocampal subfield volumes and clinical variables such as duration and age at onset of epilepsy. There were no significant differences in any hippocampal subfield volumes between patients who were rendered seizure free and those with persistent postoperative seizure symptoms. Ipsilateral but not contralateral HIA ratings were significantly correlated with gross hippocampal and subfield volumes. Our results suggest that ipsilateral hippocampal subfield volumes are not related to the chronicity/severity of TLE. We did not find any hippocampal subfield volume or HIA rating differences in patients with optimal and unfavorable outcomes. In patients with TLE and HS, sophisticated analysis of hippocampal architecture on MRI may have limited value for prediction of postoperative outcome. © 2018 The Authors. Acta Neurologica Scandinavica Published by John Wiley & Sons Ltd.

  6. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases.

    PubMed

    Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard

    2005-03-09

    Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing these data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and show how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing are available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency, the image data are manipulated using the Woolz image-processing library, provided as a dynamically linked module for each machine architecture. We conclude that Java provides an appropriate environment for efficient development of these tools, and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  7. How Many Bits Are Enough?

    NASA Technical Reports Server (NTRS)

    Larimer, James; Gille, Jennifer; Luszcz, Jeff; Hindson, William S. (Technical Monitor)

    1997-01-01

    Carlson and Cohen suggest that 'the perfect image is one that looks like a piece of the world viewed through a picture frame.' They propose that the metric for the perfect image be the discriminability of the reconstructed image from the ideal image the reconstruction is meant to represent. If these two images, the ideal and the reconstruction, are noticeably different, then the reconstruction is less than perfect. If they cannot be discriminated, then the reconstructed image is perfect. This definition has the advantage that it can be used to define 'good enough' image quality. An image that fully satisfies a task's image quality requirements, for example text legibility, is selected as the standard. Rendered images are then compared to the standard. Rendered images that are indiscriminable from the standard are good enough. Test patterns and test image sets serve as standards for many tasks and are commonplace in the image communications and display industries, so this is neither a new nor a novel idea.
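    A crude version of this discriminability criterion, ignoring the contrast-sensitivity modeling a real visual-difference metric would need, can be sketched as a thresholded per-pixel comparison (the JND threshold below is an arbitrary assumption):

```python
def indiscriminable(rendered, standard, jnd=2.0):
    """Crude 'good enough' test: no pixel differs from the standard by
    more than one just-noticeable difference (here, in gray levels).
    A real metric would weight differences by human contrast sensitivity."""
    if len(rendered) != len(standard):
        raise ValueError("images must have the same size")
    return max(abs(r - s) for r, s in zip(rendered, standard)) <= jnd

standard = [10, 50, 128, 200]
good = [10, 51, 127, 200]   # within 1 gray level everywhere
bad = [10, 60, 128, 200]    # one pixel off by 10 gray levels
ok = indiscriminable(good, standard)       # meets the standard
not_ok = indiscriminable(bad, standard)    # noticeably different
```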

  8. Multimodal Hierarchical Imaging of Serial Sections for Finding Specific Cellular Targets within Large Volumes

    PubMed Central

    Wacker, Irene U.; Veith, Lisa; Spomer, Waldemar; Hofmann, Andreas; Thaler, Marlene; Hillmer, Stefan; Gengenbach, Ulrich; Schröder, Rasmus R.

    2018-01-01

    Targeting specific cells at ultrastructural resolution within a mixed cell population or a tissue can be achieved by hierarchical imaging using a combination of light and electron microscopy. Samples embedded in resin are sectioned into arrays consisting of ribbons of hundreds of ultrathin sections and deposited on pieces of silicon wafer or conductively coated coverslips. Arrays are imaged at low resolution using a digital consumer camera such as a smartphone, or a light microscope (LM), for a rapid large-area overview, or a wide-field fluorescence light microscope (FLM) after labeling with fluorophores. After post-staining with heavy metals, arrays are imaged in a scanning electron microscope (SEM). Selection of targets is possible from 3D reconstructions generated by FLM, or from 3D reconstructions made from the SEM image stacks at intermediate resolution if no fluorescent markers are available. For ultrastructural analysis, selected targets are finally recorded in the SEM at high resolution (image pixels of a few nanometers). A ribbon-handling tool that can be retrofitted to any ultramicrotome is demonstrated. It helps with array production and substrate removal from the sectioning knife boat. A software platform that allows automated imaging of arrays in the SEM is discussed. Compared to other methods generating large-volume EM data, such as serial block-face SEM (SBF-SEM) or focused ion beam SEM (FIB-SEM), this approach has two major advantages: (1) The resin-embedded sample is conserved, albeit in a sliced-up version. It can be stained in different ways and imaged at different resolutions. (2) As the sections can be post-stained, it is not necessary to use samples strongly block-stained with heavy metals to introduce contrast for SEM imaging or to render the tissue blocks conductive. This makes the method applicable to a wide variety of materials and biological questions. In particular, pre-fixed materials, e.g. from biopsy banks and pathology labs, can be directly embedded and reconstructed in 3D. PMID:29630046

  9. Color images of Kansas subsurface geology from well logs

    USGS Publications Warehouse

    Collins, D.R.; Doveton, J.H.

    1986-01-01

    Modern wireline log combinations give highly diagnostic information that goes beyond the basic shale content, pore volume, and fluid saturation of older logs. Pattern recognition of geology from logs is made conventionally through either the examination of log overlays or log crossplots. Both methods can be combined through the use of color as a medium of information by setting the three color primaries of blue, green, and red light as the axes of a three-dimensional color space. Multiple log readings of zones are rendered as composite color mixtures which, when plotted sequentially with depth, show lithological successions in a striking manner. The method is extremely simple to program and display on a color monitor. Illustrative examples are described from the Kansas subsurface. © 1986.
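    The color-mapping idea can be sketched as follows; the three log curves and their assignment to the blue, green, and red axes are hypothetical, chosen only to illustrate the normalization:

```python
def logs_to_rgb(log_b, log_g, log_r):
    """Map three wireline log curves onto the blue, green, and red axes
    of color space. Each curve is min-max normalized over the logged
    interval, so every depth sample becomes one composite color."""
    def normalize(curve):
        lo, hi = min(curve), max(curve)
        span = (hi - lo) or 1.0       # guard against a constant curve
        return [(v - lo) / span for v in curve]
    b, g, r = normalize(log_b), normalize(log_g), normalize(log_r)
    return list(zip(r, g, b))         # one (R, G, B) triple per depth

# Hypothetical readings from three logs over five depth samples:
gamma_ray = [20, 80, 140, 60, 30]
neutron = [0.10, 0.25, 0.40, 0.20, 0.12]
density = [2.2, 2.4, 2.7, 2.3, 2.2]
colors = logs_to_rgb(gamma_ray, neutron, density)
# Plotting `colors` as a strip against depth gives the lithological
# succession display described in the abstract.
```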

  10. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which makes them severely limited in terms of the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity, (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side, and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  11. Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters

    PubMed Central

    Bajaj, Chandrajit

    2009-01-01

    Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools to study large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by using parallel isosurface extraction and rendering in conjunction with a new specialized piece of image compositing hardware called the Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data to parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces. PMID:19756231
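    The interval trees mentioned above answer one query: which bricks of the volume can contain the isovalue. The filtering idea itself can be sketched with a flat list of per-brick (min, max) summaries; the paper's external interval tree makes this same query I/O-efficient rather than a linear scan:

```python
# Which bricks can contain the isovalue? Only bricks whose [min, max]
# value interval straddles it need to be read from disk. A flat scan is
# shown for clarity; the paper's external interval trees answer the same
# query while touching far fewer disk blocks.
def active_bricks(brick_ranges, isovalue):
    return [i for i, (lo, hi) in enumerate(brick_ranges)
            if lo <= isovalue <= hi]

# Hypothetical per-brick min/max summaries for six volume bricks:
ranges = [(0, 10), (5, 20), (18, 40), (35, 60), (2, 8), (50, 90)]
hits = active_bricks(ranges, 19.0)  # -> [1, 2]: only two bricks straddle 19
```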

  12. Magnetic Resonance Imaging of Alimentary Tract Development in Manduca sexta

    PubMed Central

    Rowland, Ian J.; Goodman, Walter G.

    2016-01-01

    Non-invasive 3D magnetic resonance imaging techniques were used to investigate metamorphosis of the alimentary tract of Manduca sexta from the larval to the adult stage. The larval midgut contracts in volume immediately following cessation of feeding and then greatly enlarges during the late pharate pupal period. Magnetic resonance imaging revealed that the foregut and hindgut of the pharate pupa undergo ecdysis considerably earlier than the external exoskeleton. Expansion of air sacs in the early pupa and development of flight muscles several days later appear to orient the midgut into its adult position in the abdomen. The crop, an adult auxiliary storage organ, begins development as a dorsal outgrowth of the foregut. This coincides with a reported increase in pupal ecdysteroid titers. An outgrowth of the hindgut, the rectal sac, appears several days later and continues to expand until it nearly fills the dorsal half of the abdominal cavity. This development correlates with a second rise in pupal ecdysteroid titers. In the pharate pupa, the presence of paramagnetic species renders the silk glands hyperintense. PMID:27280776

  13. Spatio-temporal visualization of air-sea CO2 flux and carbon budget using volume rendering

    NASA Astrophysics Data System (ADS)

    Du, Zhenhong; Fang, Lei; Bai, Yan; Zhang, Feng; Liu, Renyi

    2015-04-01

    This paper presents a novel visualization method to show the spatio-temporal dynamics of carbon sinks and sources, and carbon fluxes in the ocean carbon cycle. The air-sea carbon budget and its process of accumulation are demonstrated in the spatial dimension, while the distribution pattern and variation of CO2 flux are expressed by color changes. In this way, we unite spatial and temporal characteristics of satellite data through visualization. A GPU-based direct volume rendering technique using half-angle slicing is adopted to dynamically visualize the released or absorbed CO2 gas with shadow effects. A data model is designed to generate four-dimensional (4D) data from satellite-derived air-sea CO2 flux products, and an out-of-core scheduling strategy is also proposed for on-the-fly rendering of time series of satellite data. The presented 4D visualization method is implemented on graphics cards with vertex, geometry and fragment shaders. It provides a visually realistic simulation and user interaction for real-time rendering. This approach has been integrated into the Information System of Ocean Satellite Monitoring for Air-sea CO2 Flux (IssCO2) for the research and assessment of air-sea CO2 flux in the China Seas.
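    The abstract does not detail the out-of-core scheduling strategy; one common pattern for time series of volumes, a least-recently-used cache of loaded time steps, can be sketched as follows (the loader and capacity are illustrative assumptions, not the IssCO2 system's actual scheduler):

```python
from collections import OrderedDict

class TimeStepCache:
    """Toy out-of-core scheduler: keep at most `capacity` time steps of
    a 4-D field in memory, evicting the least recently rendered step.
    `loader(t)` stands in for reading one satellite-derived volume."""
    def __init__(self, loader, capacity=3):
        self.loader = loader
        self.capacity = capacity
        self.cache = OrderedDict()
        self.loads = 0                      # count of disk reads

    def volume(self, t):
        if t in self.cache:
            self.cache.move_to_end(t)       # mark as recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the oldest step
            self.cache[t] = self.loader(t)
            self.loads += 1
        return self.cache[t]

cache = TimeStepCache(loader=lambda t: [t] * 8, capacity=3)
for t in [0, 1, 2, 1, 3, 0]:    # a playback pattern with revisits
    cache.volume(t)
# The revisit of t=1 is served from memory; t=0 must be reloaded after
# eviction, for 5 disk reads over 6 requests.
```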

  14. Novel Application of Confocal Laser Scanning Microscopy and 3D Volume Rendering toward Improving the Resolution of the Fossil Record of Charcoal

    PubMed Central

    Belcher, Claire M.; Punyasena, Surangi W.; Sivaguru, Mayandi

    2013-01-01

    Variations in the abundance of fossil charcoals between rocks and sediments are assumed to reflect changes in fire activity in Earth’s past. These variations in fire activity are often considered to be in response to environmental, ecological or climatic changes. The role that fire plays in feedbacks to such changes is becoming increasingly important to understand and highlights the need to create robust estimates of variations in fossil charcoal abundance. The majority of charcoal based fire reconstructions quantify the abundance of charcoal particles and do not consider the changes in the morphology of the individual particles that may have occurred due to fragmentation as part of their transport history. We have developed a novel application of confocal laser scanning microscopy coupled to image processing that enables the 3-dimensional reconstruction of individual charcoal particles. This method is able to measure the volume of both microfossil and mesofossil charcoal particles and allows the abundance of charcoal in a sample to be expressed as total volume of charcoal. The method further measures particle surface area and shape allowing both relationships between different size and shape metrics to be analysed and full consideration of variations in particle size and size sorting between different samples to be studied. We believe application of this new imaging approach could allow significant improvement in our ability to estimate variations in past fire activity using fossil charcoals. PMID:23977267

  15. The Chinese Visible Human (CVH) datasets incorporate technical and imaging advances on earlier digital humans

    PubMed Central

    Zhang, Shao-Xiang; Heng, Pheng-Ann; Liu, Zheng-Jin; Tan, Li-Wen; Qiu, Ming-Guo; Li, Qi-Yu; Liao, Rong-Xia; Li, Kai; Cui, Gao-Yu; Guo, Yan-Li; Yang, Xiao-Ping; Liu, Guang-Jiu; Shan, Jing-Lu; Liu, Ji-Jun; Zhang, Wei-Guo; Chen, Xian-Hong; Chen, Jin-Hua; Wang, Jian; Chen, Wei; Lu, Ming; You, Jian; Pang, Xue-Li; Xiao, Hong; Xie, Yong-Ming; Cheng, Jack Chun-Yiu

    2004-01-01

    We report the availability of a digitized Chinese male and a digitzed Chinese female typical of the population and with no obvious abnormalities. The embalming and milling procedures incorporate three technical improvements over earlier digitized cadavers. Vascular perfusion with coloured gelatin was performed to facilitate blood vessel identification. Embalmed cadavers were embedded in gelatin and cryosectioned whole so as to avoid section loss resulting from cutting the body into smaller pieces. Milling performed at −25 °C prevented small structures (e.g. teeth, concha nasalis and articular cartilage) from falling off from the milling surface. The male image set (.tiff images each of 36 Mb) has a section resolution of 3072 × 2048 pixels (∼170 μm, the accompanying magnetic resonance imaging and computer tomography data have a resolution of 512 × 512, i.e. ∼440 μm). The Chinese Visible Human male and female datasets are available at http://www.chinesevisiblehuman.com. (The male is 90.65 Gb and female 131.04 Gb). MPEG videos of direct records of real-time volume rendering are at: http://www.cse.cuhk.edu.hk/~crc PMID:15032906

  16. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method needs a prior estimation of parameters such as the number of clusters and appropriate initialized values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registration of disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnosis and to estimate standardized uptake values (SUVs) of region of interests (ROIs) in PET images. Therefore, simulation studies are conducted to apply spherical targets to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than those by the K-Means method.
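    A plain two-component EM fit, with a simple quantile split standing in for the paper's kernel-density-based initialization, can be sketched in one dimension:

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture with plain EM.
    (The paper hybridizes the GMM with kernel density estimation; here a
    crude quantile split stands in for that initialization.)"""
    data = sorted(data)
    n = len(data)
    mu = [data[n // 4], data[3 * n // 4]]   # crude init from quantiles
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, w

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(5, 1) for _ in range(200)]
mu, var, w = em_gmm_1d(data)
# The two recovered means land near the true cluster centers 0 and 5.
```

Segmenting an image then amounts to assigning each voxel to the component with the highest responsibility.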

  17. Computer aided diagnosis and treatment planning for developmental dysplasia of the hip

    NASA Astrophysics Data System (ADS)

    Li, Bin; Lu, Hongbing; Cai, Wenli; Li, Xiang; Meng, Jie; Liang, Zhengrong

    2005-04-01

    Developmental dysplasia of the hip (DDH) is a congenital malformation affecting the proximal femurs and acetabulum, which are subluxatable, dislocatable, and dislocated. Early diagnosis and treatment are important because failure to diagnose and improper treatment can result in significant morbidity. In this paper, we designed and implemented a computer-aided system for the diagnosis and treatment planning of this disease. In this design, the patient first receives a CT (computed tomography) or MRI (magnetic resonance imaging) scan. A mixture-based partial-volume (PV) algorithm was applied to perform bone segmentation on the CT image, followed by three-dimensional (3D) reconstruction and display of the segmented image, demonstrating the spatial relationship between the acetabulum and femurs for visual judgment. Several standard procedures, such as the Salter procedure, the Pemberton procedure and femoral shortening osteotomy, were simulated on the screen to rehearse a virtual treatment plan. Quantitative measurements of the Acetabular Index (AI) and Femoral Neck Anteversion (FNA) were performed on the 3D image for evaluation of DDH and treatment plans. The PC graphics-card GPU architecture was exploited to accelerate 3D rendering and geometric manipulation. The prototype system was implemented in a PC/Windows environment and is currently under clinical trial on patient datasets.

  18. Angular distribution of Pigment epithelium central limit-Inner limit of the retina Minimal Distance (PIMD), in the young not pathological optic nerve head imaged by OCT

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Sandberg-Melin, Camilla

    2018-02-01

    The present study aimed to elucidate the angular distribution of the Pigment epithelium central limit-Inner limit of the retina Minimal Distance measured over 2π radians in the frontal plane (PIMD-2π) in young healthy eyes. Both healthy eyes of 16 subjects aged [20;30[ years were included. In each eye, a volume of the optic nerve head (ONH) was captured three times with a TOPCON DRI OCT Triton (Japan). Each volume renders a representation of the ONH 2.8 mm along the sagittal axis resolved in 993 steps, 6 mm along the frontal axis resolved in 512 steps, and 6 mm along the longitudinal axis resolved in 256 steps. The captured volumes were transferred to custom-made software for semiautomatic segmentation of PIMD around the circumference of the ONH. The phases of iterated volumes were calibrated with cross-correlation. It was found that PIMD-2π expresses a double hump with a small maximum superiorly, a larger maximum inferiorly, and minima in between. The measurements indicated that there is no difference in PIMD-2π between genders or between the dominant and non-dominant eye within subject. The variation between eyes within subject is of the same order as the variation among subjects. The variation among volumes within eye is substantially lower.

  19. Relation between cannabis use and subcortical volumes in people at clinical high risk of psychosis.

    PubMed

    Buchy, Lisa; Mathalon, Daniel H; Cannon, Tyrone D; Cadenhead, Kristin S; Cornblatt, Barbara A; McGlashan, Thomas H; Perkins, Diana O; Seidman, Larry J; Tsuang, Ming T; Walker, Elaine F; Woods, Scott W; Bearden, Carrie E; Addington, Jean

    2016-08-30

    Among people at genetic risk of schizophrenia, those who use cannabis show smaller thalamic and hippocampal volumes. We evaluated this relationship in people at clinical high risk (CHR) of psychosis. The Alcohol and Drug Use Scale was used to identify 132 CHR cannabis users, the majority of whom were non-dependent cannabis users, 387 CHR non-users, and 204 healthy control non-users, and all participants completed magnetic resonance imaging scans. Volumes of the thalamus, hippocampus and amygdala were extracted with FreeSurfer, and compared across groups. Comparing all CHR participants with healthy control participants revealed no significant differences in volumes of any ROI. However, when comparing CHR users to CHR non-users, a significant ROI×Cannabis group effect emerged: CHR users showed significantly smaller amygdala compared to CHR non-users. However, when limiting analysis to CHR subjects who reported using alcohol at a 'use without impairment' severity level, the amygdala effect was non-significant; rather, smaller hippocampal volumes were seen in CHR cannabis users compared to non-users. Controlling statistically for effects of alcohol and tobacco use rendered all results non-significant. These results highlight the importance of controlling for residual confounding effects of other substance use when examining the relationship between cannabis use and neural structure. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Iodine contrast cone beam CT imaging of breast cancer

    NASA Astrophysics Data System (ADS)

    Partain, Larry; Prionas, Stavros; Seppi, Edward; Virshup, Gary; Roos, Gerhard; Sutherland, Robert; Boone, John

    2007-03-01

    An iodine contrast agent, in conjunction with an X-ray cone beam CT imaging system, was used to clearly image three biopsy-verified cancer lesions in two patients. The lesions were approximately in the 10 mm to 6 mm diameter range. Additional regions were also enhanced, with approximate dimensions down to 1 mm or less in diameter. A flat panel detector, with 194 μm pixels in 2 × 2 binning mode, was used to obtain 500 projection images at 30 fps with an 80 kVp X-ray system operating at 112 mAs, for an 8-9 mGy dose, equivalent to two-view mammography for these women. The patients were positioned prone, while the gantry rotated in the horizontal plane around the uncompressed, pendant breasts. The gantry rotated 360 degrees during the patient's 16.6 sec breath hold. A volume of 100 cc of 320 mg/ml iodine contrast was power-injected at 4 cc/sec via catheter into the arm vein of the patient. The resulting 512 × 512 × 300 cone beam CT data set of Feldkamp-reconstructed ~(0.3 mm)³ voxels was analyzed. An interval of voxel contrast values characteristic of the regions with iodine contrast enhancement was used with surface rendering to clearly identify up to a total of 13 highlighted volumes. These included the three largest lesions, which were previously biopsied and confirmed to be malignant. The other ten highlighted regions, of smaller diameters, are likely areas of increased contrast trapping unrelated to cancer angiogenesis. However, the technique itself is capable of resolving lesions that small.

  1. SU-E-J-257: Image Artifacts Caused by Implanted Calypso Beacons in MRI Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amro, H; Chetty, I; Gordon, J

    2014-06-01

    Purpose: The presence of Calypso Beacon transponders in patients can cause artifacts during MRI imaging studies. This could be a problem for post-treatment follow-up of cancer patients using MRI studies to evaluate metastasis, and for functional imaging studies. This work assesses (1) the volume immediately surrounding the transponders that will not be visualized by MRI due to the beacons, and (2) the dependence of the non-visualized volume on beacon orientation and scanning technique. Methods: Two phantoms were used in this study: (1) a water-filled box and (2) a 2300 cc block of pork meat. Calypso beacons were implanted in the phantoms in both parallel and perpendicular orientations with respect to the MR scanner magnetic field. MR image series of the phantoms were obtained on a 1.0T high-field open MR-SIM with multiple pulse sequences, for example T1-weighted fast field echo and T2-weighted turbo spin echo. Results: On average, a no-signal region with 2 cm radius and 3 cm length was measured. Image artifacts are more significant when beacons are placed parallel to the scanner magnetic field; the no-signal area around the beacon was about 0.5 cm larger in the orthogonal orientation. The no-signal region surrounding the beacons varies slightly in dimension for the different pulse sequences. Conclusion: The use of Calypso beacons can prohibit the use of MRI studies in post-treatment assessments, especially in the immediate region surrounding the implanted beacon. A characterization of the MR scanner by identifying the no-signal regions due to implanted beacons is essential. This may render the use of Calypso beacons useful for some cases and give the treating physician a chance to identify those patients prior to beacon implantation.

  2. Pairwise mixture model for unmixing partial volume effect in multi-voxel MR spectroscopy of brain tumour patients

    NASA Astrophysics Data System (ADS)

    Olliverre, Nathan; Asad, Muhammad; Yang, Guang; Howe, Franklyn; Slabaugh, Gregory

    2017-03-01

    Multi-Voxel Magnetic Resonance Spectroscopy (MV-MRS) provides an important and insightful technique for the examination of the chemical composition of brain tissue, making it an attractive medical imaging modality for the examination of brain tumours. MRS, however, is affected by the Partial Volume Effect (PVE), where the signals of multiple tissue types can be found within a single voxel, which presents an obstacle to the interpretation of the data. The PVE results from the low resolution achieved in MV-MRS images, related to the signal-to-noise ratio (SNR). To counteract the PVE, this paper proposes a novel Pairwise Mixture Model (PMM) that extends a recently reported Signal Mixture Model (SMM) for representing the MV-MRS signal as normal, low-grade or high-grade tissue types. Inspired by the Conditional Random Field (CRF) and its continuous variant, the PMM incorporates the surrounding voxel neighbourhood into an optimisation problem, the solution of which provides an estimate of a set of coefficients. The values of the estimated coefficients represent the amount of each tissue type (normal, low or high grade) found within a voxel. These coefficients can then be visualised as a nosological rendering using a coloured grid representing the MV-MRS image, overlaid on top of a structural image such as a Magnetic Resonance Image (MRI). Experimental results show an accuracy of 92.69% in classifying patient tumours as either low or high grade, compared against the histopathology for each patient. Compared to the 91.96% achieved by the SMM, the proposed PMM method demonstrates the importance of incorporating spatial coherence into the estimation, as well as its potential clinical usage.

  3. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., the Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility of rendering final images at full sensor resolution.
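    The channel-separate treatment starts from the raw mosaic itself. A minimal sketch of splitting an RGGB Bayer pattern into its per-channel sample grids, the kind of raw-domain access such an approach needs before any interpolation (illustrative only; the paper's plenoptic sub-pixel rendering is far more involved):

```python
import numpy as np

def split_bayer_rggb(raw):
    """Return the R, G1, G2 and B sample planes of an RGGB Bayer mosaic.
    Each plane holds only the sensor values actually measured for that
    channel, at half the mosaic resolution in each dimension."""
    r  = raw[0::2, 0::2]   # red sites: even rows, even columns
    g1 = raw[0::2, 1::2]   # first green sites: even rows, odd columns
    g2 = raw[1::2, 0::2]   # second green sites: odd rows, even columns
    b  = raw[1::2, 1::2]   # blue sites: odd rows, odd columns
    return r, g1, g2, b

raw = np.arange(16, dtype=float).reshape(4, 4)   # tiny synthetic mosaic
r, g1, g2, b = split_bayer_rggb(raw)
```

    Working on these planes directly, rather than on demosaiced RGB, avoids the resampling that the abstract identifies as harmful to sub-pixel registration.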

  4. Context-dependent JPEG backward-compatible high-dynamic range image compression

    NASA Astrophysics Data System (ADS)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh-definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the wide spread of HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of these subjective tests, we propose extending the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple lossy compression implementation demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
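    One way to picture this kind of backward compatibility: a legacy decoder reads only a tone-mapped LDR base layer, while an HDR-aware decoder inverts the tone mapping and adds a residual. A minimal sketch using Reinhard's global operator as a stand-in (the paper's actual JPEG architecture, tone-mapping choice and residual coding are not specified here):

```python
import numpy as np

# Sketch of a two-layer backward-compatible HDR scheme (illustrative).

def tone_map(hdr):
    """Reinhard global operator: maps [0, inf) luminance into [0, 1)."""
    return hdr / (1.0 + hdr)

def encode(hdr):
    ldr = tone_map(hdr)                  # base layer a legacy decoder shows
    residual = hdr - ldr / (1.0 - ldr)   # zero here; becomes nonzero once
    return ldr, residual                 # the base layer is quantised

def decode_hdr(ldr, residual):
    return ldr / (1.0 - ldr) + residual  # invert tone mapping, add residual

hdr = np.array([0.1, 1.0, 10.0, 100.0])
ldr, res = encode(hdr)
restored = decode_hdr(ldr, res)
```

    In a real codec both layers are lossily compressed, so the residual carries the quantisation error; here the round trip is exact by construction.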

  5. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing

    2017-02-06

    The performance of 3D rendering on a Graphics Processing Unit, which converts a 3D vector stream into a 2D frame with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving overall rendering performance. 3D-stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable a Processing-In-Memory-based GPU for efficient 3D rendering.

  6. Automatic partitioning of head CTA for enabling segmentation

    NASA Astrophysics Data System (ADS)

    Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin

    2004-05-01

    Radiologists perform a CT angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit the volumetric capabilities of CT, providing complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (the carotid is referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolutions, clinical sites, and pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (0.01 seconds per slice), which makes it attractive for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions; complex methods are restricted to the "middle" partition. The partition-enabled segmentation has been successfully tested, and results are shown from multiple cases.
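    One of the profiles named above, per-slice image entropy, is simple to compute. A toy sketch on a synthetic volume (bin count and volume contents are invented; the paper's full rule-based combination of bone, entropy and sinus profiles is not reproduced):

```python
import numpy as np

def slice_entropy(vol, bins=32):
    """Shannon entropy (bits) of the intensity histogram of each axial
    slice; flat slices score 0, textured slices score high."""
    ents = []
    for sl in vol:
        hist, _ = np.histogram(sl, bins=bins, range=(vol.min(), vol.max()))
        p = hist / hist.sum()
        p = p[p > 0]
        ents.append(-(p * np.log2(p)).sum())
    return np.array(ents)

rng = np.random.default_rng(1)
vol = np.concatenate([
    np.zeros((5, 16, 16)),         # uniform slices: entropy 0
    rng.normal(size=(5, 16, 16)),  # textured slices: high entropy
])
profile = slice_entropy(vol)
```

    Transitions in such a profile are the kind of feature a rule-based method can pick up as candidate partition locations.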

  7. Morphology of the pancreas in type 2 diabetes: effect of weight loss with or without normalisation of insulin secretory capacity.

    PubMed

    Al-Mrabeh, Ahmad; Hollingsworth, Kieren G; Steven, Sarah; Taylor, Roy

    2016-08-01

    This study was designed to establish whether the low volume and irregular border of the pancreas in type 2 diabetes would be normalised after reversal of diabetes. A total of 29 individuals with type 2 diabetes undertook a very low energy (very low calorie) diet for 8 weeks followed by weight maintenance for 6 months. Methods were established to quantify the pancreas volume and the degree of irregularity of the pancreas border. Three-dimensional volume rendering and fractal dimension (FD) analysis of the MRI-acquired images were employed, as was three-point Dixon imaging to quantify the fat content. There was no change in pancreas volume 6 months after reversal of diabetes compared with baseline (52.0 ± 4.9 cm³ and 51.4 ± 4.5 cm³, respectively; p = 0.69), nor was any volumetric change observed in the non-responders. There was an inverse relationship between the volume and fat content of the pancreas in the total study population (r = -0.50, p = 0.006). Reversal of diabetes was associated with an increase in irregularity of the pancreas borders between baseline and 8 weeks (FD 1.143 ± 0.013 and 1.169 ± 0.006, respectively; p = 0.05), followed by a decrease at 6 months (1.130 ± 0.012, p = 0.006). On the other hand, no changes in FD were seen in the non-reversed group. Restoration of normal insulin secretion did not increase the subnormal pancreas volume over 6 months in the study population. A significant change in irregularity of the pancreas borders occurred after acute weight loss only after reversal of diabetes. Pancreas morphology in type 2 diabetes may be prognostically important, and its relationship to change in beta cell function requires further study.

  8. Real-time 3D image reconstruction guidance in liver resection surgery

    PubMed Central

    Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-01-01

    Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of the 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance for safety, but also the current limits that automatic augmented reality will have to overcome. Conclusions Virtual patient modeling should become mandatory for certain interventions that remain to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but currently remains limited by the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the hybrid OR. PMID:24812598

  9. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. 
In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC architecture.
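    The essence of a DRR is a set of Beer-Lambert line integrals through the CT attenuation volume. A minimal CPU sketch for the simplest parallel-beam geometry along one axis (the paper's GPU implementation handles perspective geometry, sampling approximations and arbitrary poses, none of which appear here):

```python
import numpy as np

def parallel_drr(mu, spacing=1.0, axis=0):
    """Digitally reconstructed radiograph for a parallel beam along `axis`:
    I/I0 = exp(-integral of the attenuation coefficient mu along the ray),
    approximated by a voxel sum times the voxel spacing."""
    return np.exp(-mu.sum(axis=axis) * spacing)

# Synthetic attenuation volume: an attenuating "rod" along the ray direction.
mu = np.zeros((8, 4, 4))
mu[:, 1, 1] = 0.1
drr = parallel_drr(mu)
```

    Rays through empty voxels come out at full intensity (1.0); the ray through the rod is attenuated by exp(-0.8). The GPU win in the paper comes from evaluating many such independent ray integrals in parallel.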

  10. LORENZ: a system for planning long-bone fracture reduction

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Burgstaller, Wolfgang; Wirth, Joachim; Baumann, Bernard; Jacob, Augustinus L.; Bieri, Kurt; Traud, Stefan; Strub, Michael; Regazzoni, Pietro; Messmer, Peter

    2003-05-01

    Long bone fractures belong to the most common injuries encountered in routine clinical trauma surgery. Preoperative assessment and decision making are usually based on standard 2D radiographs of the injured limb. Taking into account that a 3D imaging modality such as computed tomography (CT) is not used for diagnosis in clinical routine, we have designed LORENZ, a fracture reduction planning tool based on such standard radiographs. Given the considerable success of so-called image-free navigation systems for total knee replacement in orthopaedic surgery, we assume that a similar tool for long bone fracture reduction should have considerable impact on computer-aided trauma surgery in a standard clinical routine setup. The case of long bone fracture reduction is, however, somewhat more complicated, since not only scale-independent angles indicating biomechanical measures such as varus and valgus are involved. Reduction path planning requires that the individual anatomy and the classification of the fracture be taken into account. In this paper, we present the basic ideas of this planning tool, its current state, and the methodology chosen. LORENZ takes one or more conventional radiographs of the broken limb as input data. In addition, one or more x-rays of the opposite healthy bone are taken and mirrored if necessary. The most adequate CT model is selected from a database; currently, this is achieved by using a scale-space approach on the digitized x-ray images and comparing standard perspective renderings to these x-rays. After finding a CT volume with a similar bone, a triangulated surface model is generated, and the surgeon can virtually break the bone and arrange the fragments in 3D according to the x-ray images of the broken bone. Common osteosynthesis plates and implants can be loaded from CAD datasets and are visualized as well. In addition, LORENZ renders virtual x-ray views of the fracture reduction process.
    The hybrid surface/voxel rendering engine of LORENZ also features full collision detection of fragments and implants using the RAPID collision detection library. The reduction path is saved, and a TCP/IP interface to a robot for executing the reduction has been added. LORENZ is platform independent and was programmed using Qt, AVW and OpenGL. We present a prototype for computer-aided fracture reduction planning based on standard radiographs. First tests on clinical CT/x-ray image pairs showed good performance; current efforts focus on improving the speed of model retrieval by using orthonormal image moment decomposition, and on clinical evaluation for both training and surgical planning purposes. Furthermore, user-interface aspects are currently under evaluation and will be discussed.

  11. American River Watershed Investigation, California, Feasibility Report. Part 1. Main Report. Part 2. Environmental Impact Statement/Environmental Impact Report

    DTIC Science & Technology

    1991-12-01

    determined more by economic forces than by flood protection. Thus, if inadequate flood protection rendered development in portions of the American River flood...1978 Patwin. In: Handbook of North American Indians: Volume 8 California, Robert F. Heizer , volume editor. Smithsonian Institution, Washington, D.C. pp...Norman L. & Arlean H. Towne. 1978 Nisenan. In: Handbook of North American Indians: Volume 8 California, Robert F. Heizer , volume editor. Smithsonian

  12. A 3D ultrasound scanner: real time filtering and rendering algorithms.

    PubMed

    Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M

    1997-01-01

    The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical and user-friendly way to help the physician during diagnosis. A comparison is presented among several algorithms for digital filtering, data segmentation and rendering for real-time, PC-based, three-dimensional reconstruction from B-mode ultrasonic biomedical images. Several digital filtering algorithms have been compared with respect to processing time and final image quality. Three-dimensional data segmentation and rendering have been carried out with special reference to user-friendly features for foreseeable applications and to reconstruction speed.
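    A representative member of the digital-filter family compared in such pipelines is a small median filter, commonly used to knock down speckle in B-mode images. A pure-numpy 3x3 sketch (borders left unfiltered for brevity; this is an illustrative stand-in, not the project's actual filter set):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter over the interior of a 2D image; border pixels
    are copied through unchanged."""
    out = img.copy()
    h, w = img.shape
    # Stack the nine shifted views of the interior and take the median.
    stack = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out

img = np.zeros((5, 5))
img[2, 2] = 100.0          # an isolated speckle-like outlier
smoothed = median3x3(img)
```

    The isolated outlier is removed entirely, which is exactly the behaviour that makes median filtering attractive for speckle while preserving edges better than linear smoothing.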

  13. Determination of diffusion coefficients of biocides on their passage through organic resin-based renders.

    PubMed

    Styszko, Katarzyna; Kupiec, Krzysztof

    2016-10-01

    In this study the diffusion coefficients of isoproturon, diuron and cybutryn in acrylate and silicone resin-based renders were determined. The diffusion coefficients were determined by measuring the concentrations of biocides in the liquid phase after contact with the renders for specific time intervals. The mathematical solution of the transient diffusion equation for an infinite plate in contact on one side with a limited volume of water was used to calculate the diffusion coefficient. The diffusion coefficients through the acrylate render were 8.10·10⁻⁹ m² s⁻¹ for isoproturon, 1.96·10⁻⁹ m² s⁻¹ for diuron and 1.53·10⁻⁹ m² s⁻¹ for cybutryn. The results for the silicone render were lower by one order of magnitude. The compounds with a high diffusion coefficient for one polymer had likewise high values for the other polymer. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. A spatially encoded dose difference maximal intensity projection map for patient dose evaluation: A new first line patient quality assurance tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu Weigang; Graff, Pierre; Boettger, Thomas

    2011-04-15

    Purpose: To develop a spatially encoded dose difference maximal intensity projection (DD-MIP) as an online patient dose evaluation tool for visualizing the dose differences between the planning dose and the dose on the treatment day. Methods: Megavoltage cone-beam CT (MVCBCT) images acquired on the treatment day are used for generating the dose difference index. Each index is represented by different colors for underdose, acceptable, and overdose regions. A maximal intensity projection (MIP) algorithm is developed to compress all the information of an arbitrary 3D dose difference index into a 2D DD-MIP image. In such an algorithm, a distance transformation is generated based on the planning CT. Then, two new volumes representing the overdose and underdose regions of the dose difference index are encoded with the distance transformation map. The distance-encoded indices of each volume are normalized using the skin distance obtained on the planning CT. After that, two MIPs are generated based on the underdose and overdose volumes with green-to-blue and green-to-red lookup tables, respectively. Finally, the two MIPs are merged with an appropriate transparency level and rendered in planning CT images. Results: The spatially encoded DD-MIP was implemented in a dose-guided radiotherapy prototype and tested on 33 MVCBCT images from six patients. The user can easily establish the threshold for overdose and underdose. A 3% difference between the treatment and planning dose was used as the threshold in the study; hence, the DD-MIP shows red or blue color for dose differences >3% or ≤3%, respectively. With such a method, the overdose and underdose regions can be visualized and distinguished without being overshadowed by superficial dose differences.
    Conclusions: A DD-MIP algorithm was developed that compresses information from 3D into a single projection or two orthogonal projections while indicating to the user whether the dose difference is on the skin surface or deeper.
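    Stripped of the depth encoding and colour lookup tables, the core operation is thresholding the 3D difference volume into overdose/underdose components and projecting each with a maximum-intensity projection. A toy numpy sketch (threshold and volume are invented; the distance-transform normalisation of the full method is omitted):

```python
import numpy as np

def dd_mip(dose_diff, thresh=0.03, axis=0):
    """Split a signed 3D dose-difference volume at +/-thresh and return the
    maximum-intensity projection of the overdose and underdose parts."""
    over = np.where(dose_diff > thresh, dose_diff, 0.0)
    under = np.where(dose_diff < -thresh, -dose_diff, 0.0)
    return over.max(axis=axis), under.max(axis=axis)

diff = np.zeros((4, 3, 3))
diff[0, 0, 0] = 0.05     # a 5% overdose voxel
diff[2, 1, 1] = -0.08    # an 8% underdose voxel
mip_over, mip_under = dd_mip(diff)
```

    Projecting the two signed components separately is what lets overdose and underdose be displayed with independent lookup tables rather than cancelling in a single projection.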

  15. A Fast Method for the Segmentation of Synaptic Junctions and Mitochondria in Serial Electron Microscopic Images of the Brain.

    PubMed

    Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel

    2016-04-01

    Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
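    The memory-saving idea described above, running the energy optimisation in overlapping partitions, reduces to covering the stack with overlapping slice ranges. A minimal sketch (chunk and overlap sizes are invented; the CRF inference run inside each block is not shown):

```python
def overlapping_chunks(n_slices, chunk=100, overlap=10):
    """Yield (start, stop) slice ranges covering a stack in overlapping
    blocks, so each regularisation problem fits in memory while the
    overlap keeps labels consistent across block boundaries."""
    start = 0
    while start < n_slices:
        stop = min(start + chunk, n_slices)
        yield start, stop
        if stop == n_slices:
            break
        start = stop - overlap

ranges = list(overlapping_chunks(250))
```

    Every slice is covered at least once, and interior boundaries are covered twice so the per-block solutions can be blended or reconciled in the overlap.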

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bargellini, Irene, E-mail: irenebargellini@hotmail.com; Turini, Francesca; Bozzi, Elena

    To assess the feasibility of proper hepatic artery catheterization using a 3D model obtained from preprocedural computed tomographic angiography (CTA), fused with real-time fluoroscopy, during transarterial chemoembolization of hepatocellular carcinoma. Twenty consecutive cirrhotic patients with hepatocellular carcinoma undergoing transarterial chemoembolization were prospectively enrolled onto the study. The early arterial phase axial images of the preprocedural CTA were postprocessed on an independent workstation connected to the angiographic system (Innova 4100; GE Healthcare, Milwaukee, WI), obtaining a 3D volume rendering (VR) image that included the abdominal aorta, splanchnic arteries, and first and second lumbar vertebrae. The VR image was manually registered to the real-time X-ray fluoroscopy, with the lumbar spine used as the reference. The VR image was then used as guidance to selectively catheterize the proper hepatic artery. The procedure was considered successful when performed with no need for intraarterial contrast injections or angiographic acquisitions. The procedure was successful in 19 (95 %) of 20 patients. In one patient, celiac trunk angiography was required for the presence of a significant ostial stenosis that was underestimated at computed tomography. Time for image reconstruction and registration was <10 min in all cases. The use of the preprocedural CTA model with fluoroscopy enables confident and direct catheterization of the proper hepatic artery with no need for preliminary celiac trunk angiography, thus reducing radiation exposure and contrast media administration.

  17. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

    We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which allows the femur, patella, and tibia to be reliably segmented by iterative adaptation of the model according to image gradients. Thin-plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to the image data based on gray-value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared with manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as the secondary endpoint. Because cartilage is a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a demanding criterion.
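    The primary endpoint above is per-voxel sensitivity against the manual reference: TP / (TP + FN) over the two binary masks. A toy sketch with invented 2-D masks, which also shows why thin structures make this endpoint demanding (a one-voxel offset of the automatic contour immediately costs sensitivity):

```python
import numpy as np

def voxel_sensitivity(auto, manual):
    """Fraction of manually labelled voxels that the automatic
    segmentation also labels: TP / (TP + FN)."""
    tp = np.logical_and(auto, manual).sum()
    fn = np.logical_and(~auto, manual).sum()
    return tp / (tp + fn)

manual = np.zeros((4, 4), dtype=bool)
manual[1:3, 1:3] = True            # 4 reference voxels (a "thin" structure)
auto = np.zeros((4, 4), dtype=bool)
auto[1:3, 1] = True                # overlaps only 2 of the 4
sens = voxel_sensitivity(auto, manual)
```

    Here a one-column misplacement halves the sensitivity even though the surface distance error is a single voxel, mirroring the abstract's closing remark.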

  18. Virtual arthroscopy of the visible human female temporomandibular joint.

    PubMed

    Ishimaru, T; Lew, D; Haller, J; Vannier, M W

    1999-07-01

    This study was designed to obtain views of the temporomandibular joint (TMJ) by means of computed arthroscopic simulation (virtual arthroscopy) using three-dimensional (3D) processing. Volume renderings of the TMJ from very thin cryosection slices of the Visible Human Female were obtained from the Internet. Analyze (AVW) software (Biomedical Imaging Resource, Mayo Foundation, Rochester, MN) on a Silicon Graphics O2 workstation (Mountain View, CA) was then used to obtain 3D images and allow navigation ("fly-through") of the simulated joint. Good virtual arthroscopic views of the upper and lower joint spaces of both TMJs were obtained by fly-through simulation from the lateral and endaural sides. It was possible to observe the presence of a partial defect in the articular disc and an osteophyte on the condyle. Virtual arthroscopy provided visualization of regions not accessible to real arthroscopy. These results indicate that virtual arthroscopy will be a new technique for investigating the TMJ of patients with TMJ disorders in the near future.

  19. The Impact of Nonphysician Providers on Diagnostic and Interventional Radiology Practices: Operational and Educational Implications.

    PubMed

    Hawkins, C Matthew; Bowen, Michael A; Gilliland, Charles A; Walls, D Gail; Duszak, Richard

    2015-09-01

    The numbers of nurse practitioners (NPs) and physician assistants (PAs) are increasing throughout the entire health care enterprise, and a similar expansion continues within radiology. The use of radiologist assistants is growing in some radiology practices as well. The increased volume of services rendered by this growing nonphysician provider subset of the health care workforce within and outside radiology departments warrants closer review, particularly with regard to their potential influence on radiology education and medical imaging resource utilization. In this article (the second in a two-part series), the authors review recent literature and offer recommendations for radiology practices regarding the impact NPs, PAs, and radiologist assistants may have on interventional and diagnostic radiology practices. Their potential impact on medical education is also discussed. Finally, staffing for radiology departments, as a result of an enlarging nonradiology NP and PA workforce ordering diagnostic imaging, is considered. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  20. Accuracy and Specific Value of Cardiovascular 3D-Models in Pediatric CT-Angiography.

    PubMed

    Hammon, Matthias; Rompel, Oliver; Seuss, Hannes; Dittrich, Sven; Uder, Michael; Rüffer, Andrè; Cesnjevar, Robert; Ehret, Nicole; Glöckler, Martin

    2017-12-01

    Computed tomography (CT) angiography is routinely performed prior to catheter-based and surgical treatment of congenital heart disease. To date, little is known about the accuracy and relative advantages of different 3D reconstructions of CT data, yet exact anatomical information is crucial. We analyzed 35 consecutive CT angiographies of infants with congenital heart disease. All datasets were reconstructed three-dimensionally using the volume rendering technique (VRT) and threshold-based segmentation (stereolithographic model, STL). Additionally, two-dimensional reconstructions were obtained with the maximum intensity projection (MIP). In each dataset and resulting image, vascular diameters of four different vessels were measured and compared with the reference standard, measured via multiplanar reformation (MPR). The measurements obtained via the STL, MIP, and VRT images were compared with the reference standard. There was a significant difference (p < 0.05) between measurements. The mean difference was 0.0 mm for STL images, -0.1 mm for MIP images, and -0.3 mm for VRT images. The range of the differences was -0.7 to 1.0 mm for STL images, -0.6 to 0.5 mm for MIP images and -1.1 to 0.7 mm for VRT images. There was an excellent correlation between the STL, MIP, and VRT measurements and the reference standard. Inter-reader reliability was excellent (p < 0.01). STL models of cardiovascular structures are more accurate than the traditional VRT models. Additionally, they can be standardized and are reproducible.
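    The accuracy figures above boil down to paired differences against the MPR reference: a mean difference (bias) and its range per reconstruction method. A sketch with invented diameters (the study's actual measurements are not reproduced):

```python
import numpy as np

# Paired-difference summary of one method against the reference standard.
ref = np.array([5.2, 7.8, 3.1, 10.4])   # MPR reference diameters, mm (made up)
stl = np.array([5.3, 7.7, 3.1, 10.5])   # STL-model measurements, mm (made up)

diff = stl - ref                         # per-vessel difference
bias, lo, hi = diff.mean(), diff.min(), diff.max()
```

    Reporting both the bias and the range, as the abstract does, separates a systematic offset of a method from its worst-case per-vessel disagreement.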

  1. [Non-destructive, preclinical evaluation of root canal anatomy of human teeth with flat-panel detector volume CT (FD-VCT)].

    PubMed

    Heidrich, G; Hassepass, F; Dullin, C; Attin, T; Grabbe, E; Hannig, C

    2005-12-01

    Successful endodontic diagnostics and therapy call for adequate depiction of the root canal anatomy with multimodal diagnostic imaging. The aim of the present study is to evaluate visualization of the endodont with flat-panel detector volume CT (FD-VCT). 13 human teeth were examined with the prototype of an FD-VCT. After data acquisition and generation of volume data sets with the volume rendering technique (VRT), the findings obtained were compared to conventional X-rays and cross-section preparations of the teeth. The anatomical structures of the endodont, such as root canals, side canals and communications between different root canals as well as denticles, could be detected precisely with FD-VCT. The length of curved root canals was also determined accurately. The spatial resolution of the system is around 140 µm. Only around 73% of the main root canals and 87% of the roots detected with FD-VCT could be visualized with conventional dental X-rays. None of the side canals shown with FD-VCT was detectable on conventional X-rays. In all cases the enamel and dentin of the teeth could be well delineated. No differences in image quality could be discerned between stored and freshly extracted teeth, or between primary and adult teeth. FD-VCT is an innovative diagnostic modality for preclinical and experimental use in non-destructive three-dimensional analysis of teeth. Thanks to its high isotropic spatial resolution compared with conventional X-rays, even the minutest structures, such as side canals, can be detected and evaluated. Potential applications in endodontics include diagnostics and evaluation of all steps of root canal treatment, ranging from trepanation through determination of the length of the root canal to obturation.

  2. Unconscious neural processing differs with method used to render stimuli invisible

    PubMed Central

    Fogelson, Sergey V.; Kohler, Peter J.; Miller, Kevin J.; Granger, Richard; Tse, Peter U.

    2014-01-01

    Visual stimuli can be kept from awareness using various methods. The extent of processing that a given stimulus receives in the absence of awareness is typically used to make claims about the role of consciousness more generally. The neural processing elicited by a stimulus, however, may also depend on the method used to keep it from awareness, and not only on whether the stimulus reaches awareness. Here we report that the method used to render an image invisible has a dramatic effect on how category information about the unseen stimulus is encoded across the human brain. We collected fMRI data while subjects viewed images of faces and tools that were rendered invisible using either continuous flash suppression (CFS) or chromatic flicker fusion (CFF). In a third condition, we presented the same images under normal fully visible viewing conditions. We found that category information about visible images could be extracted from patterns of fMRI responses throughout areas of neocortex known to be involved in face or tool processing. However, category information about stimuli kept from awareness using CFS could be recovered exclusively within occipital cortex, whereas information about stimuli kept from awareness using CFF was also decodable within temporal and frontal regions. We conclude that unconsciously presented objects are processed differently depending on how they are rendered subjectively invisible. Caution should therefore be used in making generalizations on the basis of any one method about the neural basis of consciousness or the extent of information processing without consciousness. PMID:24982647

  3. Unconscious neural processing differs with method used to render stimuli invisible.

    PubMed

    Fogelson, Sergey V; Kohler, Peter J; Miller, Kevin J; Granger, Richard; Tse, Peter U

    2014-01-01

    Visual stimuli can be kept from awareness using various methods. The extent of processing that a given stimulus receives in the absence of awareness is typically used to make claims about the role of consciousness more generally. The neural processing elicited by a stimulus, however, may also depend on the method used to keep it from awareness, and not only on whether the stimulus reaches awareness. Here we report that the method used to render an image invisible has a dramatic effect on how category information about the unseen stimulus is encoded across the human brain. We collected fMRI data while subjects viewed images of faces and tools that were rendered invisible using either continuous flash suppression (CFS) or chromatic flicker fusion (CFF). In a third condition, we presented the same images under normal fully visible viewing conditions. We found that category information about visible images could be extracted from patterns of fMRI responses throughout areas of neocortex known to be involved in face or tool processing. However, category information about stimuli kept from awareness using CFS could be recovered exclusively within occipital cortex, whereas information about stimuli kept from awareness using CFF was also decodable within temporal and frontal regions. We conclude that unconsciously presented objects are processed differently depending on how they are rendered subjectively invisible. Caution should therefore be used in making generalizations on the basis of any one method about the neural basis of consciousness or the extent of information processing without consciousness.

  4. Evaluating progressive-rendering algorithms in appearance design tasks.

    PubMed

    Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio

    2013-01-01

    Progressive rendering is becoming a popular alternative to precomputation-based approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same held between progressive photon mapping and virtual-point-light rendering. The user workflow did not differ significantly across the four algorithms. The Web Extras include a video showing how the four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.

  5. Retrospective 4D MR image construction from free-breathing slice Acquisitions: A novel graph-based approach.

    PubMed

    Tong, Yubing; Udupa, Jayaram K; Ciesielski, Krzysztof C; Wu, Caiyun; McDonough, Joseph M; Mong, David A; Campbell, Robert M

    2017-01-01

    Dynamic or 4D imaging of the thorax has many applications. Both prospective and retrospective respiratory gating and tracking techniques have been developed for 4D imaging via CT and MRI. For pediatric imaging, due to radiation concerns, MRI becomes the de facto modality of choice. In thoracic insufficiency syndrome (TIS), patients often suffer from extreme malformations of the chest wall, diaphragm, and/or spine, with inability of the thorax to support normal respiration or lung growth (Campbell et al., 2003, Campbell and Smith, 2007). As such, the patient cooperation needed by some gating and tracking techniques is difficult to realize without causing patient discomfort and interfering with the breathing mechanism itself. Therefore (ventilator-supported) free-breathing MRI acquisition is currently the best choice for imaging these patients. This, however, raises the question of how to create a consistent 4D image from such acquisitions. This paper presents a novel graph-based technique for compiling the best 4D image volume representing the thorax over one respiratory cycle from slice images acquired during unencumbered natural tidal breathing of pediatric TIS patients. In our approach, for each coronal (or sagittal) slice position, images are acquired at a rate of about 200-300 ms/slice over several natural breathing cycles, which yields over 2000 slices. A weighted graph is formed where each acquired slice constitutes a node and the weight of the arc between two nodes defines the degree of contiguity in space and time of the two slices. For each respiratory phase, an optimal 3D spatial image is constructed by finding the best path in the graph in the spatial direction. The set of all such 3D images for a given respiratory cycle constitutes a 4D image. Subsequently, the best 4D image among all such constructed images is found over all imaged respiratory cycles. Two types of evaluation studies were carried out to understand the behavior of this algorithm and to compare it with a method called Random Stacking: a 4D phantom study and ten 4D MRI acquisitions from TIS patients and normal subjects. The 4D phantom was constructed by 3D printing the pleural spaces of an adult thorax, which were segmented in a breath-held MRI acquisition. Qualitative visual inspection via cine display of the slices in space and time and in 3D rendered form showed smooth variation for all data sets constructed by the proposed method. Quantitative evaluation was carried out to measure spatial and temporal contiguity of the slices via segmented pleural spaces. The optimal method showed smooth variation of the pleural space, whereas Random Stacking behaved erratically. The volumes of the pleural spaces at the respiratory phases corresponding to end inspiration and end expiration were compared to volumes obtained from breath-hold acquisitions at roughly the same phase; the mean difference was roughly 3%. The proposed method is purely image-based and post hoc, and does not need breath holding or external surrogates or instruments to record respiratory motion or tidal volume. This is important and practically warranted for pediatric patients. The constructed 4D images portray the spatial and temporal smoothness that should be expected in a consistent 4D volume. We believe that the method can be routinely used for thoracic 4D imaging. Copyright © 2016 Elsevier B.V. All rights reserved.
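    The core of the approach above is a best-path search over a graph whose nodes are acquired slices and whose arc weights score spatial/temporal discontiguity. A generic sketch of such a search is below; the node layout, weight function, and all names are illustrative assumptions, not the authors' implementation.

```python
import heapq

def best_path(num_positions, candidates, weight):
    """Pick one acquired slice per spatial position so that the summed
    arc weight (discontiguity) along the path is minimal.
    candidates[p] lists the slices available at position p;
    weight(a, b) scores the discontiguity between adjacent slices."""
    # Dijkstra over the layered slice graph: state = (position, slice)
    dist = {(0, s): 0.0 for s in candidates[0]}
    prev = {}
    heap = [(0.0, 0, s) for s in candidates[0]]
    heapq.heapify(heap)
    while heap:
        d, p, s = heapq.heappop(heap)
        if d > dist.get((p, s), float("inf")):
            continue  # stale heap entry
        if p == num_positions - 1:
            # reconstruct the chosen slice for every position
            path = [s]
            while (p, s) in prev:
                p, s = prev[(p, s)]
                path.append(s)
            return list(reversed(path)), d
        for t in candidates[p + 1]:
            nd = d + weight(s, t)
            if nd < dist.get((p + 1, t), float("inf")):
                dist[(p + 1, t)] = nd
                prev[(p + 1, t)] = (p, s)
                heapq.heappush(heap, (nd, p + 1, t))
    return None, float("inf")
```

    In practice the weight would combine measures of spatial and temporal contiguity between the two slices; here a slice is reduced to a single number purely for illustration.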

  6. Medical review practices for driver licensing volume 2: case studies of medical referrals and licensing outcomes in Maine, Ohio, Oregon, Texas, Washington, and Wisconsin.

    DOT National Transportation Integrated Search

    2017-03-01

    This is the second of three reports examining driver medical review practices in the United States and how they fulfill the basic functions of identifying, assessing, and rendering licensing decisions on medically at-risk drivers. This volume pre...

  7. Structuring Mentoring Relationships for Competence, Character, and Purpose

    ERIC Educational Resources Information Center

    Rhodes, Jean E.; Spencer, Renee

    2010-01-01

    We close this volume with a final commentary from two leaders in the mentoring field. Rhodes and Spencer articulate how the contributions to this volume offer a richer, more complex rendering of relational styles and processes than has been laid out previously in the mentoring literature. They suggest that these efforts should provoke discussion…

  8. Field Operations and Enforcement Manual for Air Pollution Control. Volume III: Inspection Procedures for Specific Industries.

    ERIC Educational Resources Information Center

    Weisburd, Melvin I.

    The Field Operations and Enforcement Manual for Air Pollution Control, Volume III, explains in detail the following: inspection procedures for specific sources, kraft pulp mills, animal rendering, steel mill furnaces, coking operations, petroleum refineries, chemical plants, non-ferrous smelting and refining, foundries, cement plants, aluminum…

  9. Registration of multiple video images to preoperative CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    1999-05-01

    In this paper we propose a method that uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image-guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 × 80 mm. We have extended an existing information-theoretic framework for 2D-3D registration so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel technique we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
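    The similarity measure being maximized, mutual information, can be estimated from the joint intensity histogram of the two images. The NumPy sketch below is a generic illustration of that estimator, not the authors' code; the registration loop would perturb the pose, re-render, and keep the pose that raises this value.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate mutual information (in bits) between two equally
    sized images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

    Identical images give maximal mutual information; statistically independent images give a value near zero, which is what makes the quantity usable as a registration objective.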

  10. A method to generate soft shadows using a layered depth image and warping.

    PubMed

    Im, Yeon-Ho; Han, Chang-Young; Kim, Lee-Sup

    2005-01-01

    We present an image-based method for propagating area light illumination through a Layered Depth Image (LDI) to generate soft shadows from opaque and nonrefractive transparent objects. In our approach, using the depth peeling technique, we render an LDI from a reference light sample on a planar light source. Light illumination of all pixels in the LDI is then determined for all the other sample points via warping, an image-based rendering technique that approximates ray tracing in our method. We use an image-warping equation and McMillan's warp-ordering algorithm to find the intersections between rays and polygons and the order of those intersections. Experiments for opaque and nonrefractive transparent objects are presented. Results indicate our approach generates soft shadows quickly and effectively. Advantages and disadvantages of the proposed method are also discussed.

  11. Evaluation of stone volume distribution in renal collecting system as a predictor of stone-free rate after percutaneous nephrolithotomy: a retrospective single-center study.

    PubMed

    Atalay, Hasan Anıl; Canat, Lutfi; Bayraktarlı, Recep; Alkan, Ilter; Can, Osman; Altunrende, Fatih

    2017-06-23

    We analyzed our stone-free rates after percutaneous nephrolithotomy (PNL) with regard to stone burden and its ratio to the renal collecting system volume. Data of 164 patients who underwent PNL were analyzed retrospectively. Volume segmentation of the renal collecting system and stones was done using 3D segmentation software on images obtained from CT data. Analyzed stone volume (ASV) and renal collecting system volume (RCSV) were measured, and the ASV-to-RCSV ratio was calculated after the creation of a 3D surface volume rendering of the renal stones and collecting system. Univariate and multivariate statistical analyses were performed to determine factors affecting stone-free rates; we also assessed the predictive accuracy of the ASV-to-RCSV ratio using the receiver operating characteristic (ROC) curve and AUC. The stone-free rate of PNL monotherapy was 53% (164 procedures). The ASV-to-RCSV ratio and the number of calyces with stones were the most influential predictors of stone-free status (OR 4.15, 95% CI 2.24-7.24, p < 0.001; OR 2.62, 95% CI 1.38-4.97, p < 0.001, respectively). Other factors associated with the stone-free rate were maximum stone size (p < 0.029), stone surface area (p < 0.010), and stone burden volume (p < 0.001). The predictive accuracy of the ASV-to-RCSV ratio was AUC 0.76. Stone burden volume distribution in the renal collecting system, calculated using the 3D volume segmentation method before PNL surgery, is a significant determinant of the stone-free rate. It could be used as a single guide variable by the clinician before renal stone surgery to predict extra requirements for stone clearance.
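    The reported AUC of 0.76 summarizes how well the ASV-to-RCSV ratio ranks stone-free versus residual-stone cases. AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the normalized Mann-Whitney U statistic), which can be computed directly; this sketch is illustrative, not the study's analysis code.

```python
def auc(pos_scores, neg_scores):
    """AUC = probability that a randomly chosen positive case is
    scored higher than a randomly chosen negative one (ties = 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

    An AUC of 1.0 means the predictor separates the two groups perfectly; 0.5 means it is no better than chance.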

  12. Optoacoustic imaging in five dimensions

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. L.; Gottschalk, Sven; Fehm, Thomas F.; Razansky, Daniel

    2015-03-01

    We report on an optoacoustic imaging system capable of acquiring volumetric multispectral optoacoustic data in real time. The system is based on simultaneous acquisition of optoacoustic signals from 256 different tomographic projections by means of a spherical matrix array. Thereby, volumetric reconstructions can be done at a high frame rate, limited only by the pulse repetition rate of the laser. The developed tomographic approach presents important advantages over previously reported systems that use scanning to attain volumetric optoacoustic data. First, dynamic processes, such as the biodistribution of optical biomarkers, can be monitored in the entire volume of interest. Second, out-of-plane and motion artifacts that could degrade the image quality when imaging living specimens can be avoided. Finally, real-time 3D performance saves time required for experimental and clinical observations. The feasibility of optoacoustic imaging in five dimensions, i.e. real-time acquisition of volumetric datasets at multiple wavelengths, is reported. In this way, volumetric images of spectrally resolved chromophores are rendered in real time, thus offering an unparalleled imaging performance among current bio-imaging modalities. This performance is subsequently showcased by video-rate visualization of in vivo hemodynamic changes in mouse brain and handheld visualization of blood oxygenation in deep human vessels. The new capabilities open prospects for translating optoacoustic technology into a high-performance imaging modality for biomedical research and clinical practice, with multiple applications envisioned, from cardiovascular and cancer diagnostics to neuroimaging and ophthalmology.
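    "Five dimensions" here means three spatial axes plus time plus wavelength. Resolving chromophores (e.g. oxy- and deoxyhemoglobin) from per-wavelength signals is commonly posed as linear spectral unmixing; the least-squares sketch below illustrates the idea, with made-up placeholder spectra rather than real absorption coefficients, and is not the authors' reconstruction pipeline.

```python
import numpy as np

# Columns: assumed (hypothetical) absorption of two chromophores
# at three illumination wavelengths -- placeholder values only.
S = np.array([[1.0, 0.2],
              [0.5, 0.9],
              [0.3, 1.4]])

def unmix(measurements):
    """Recover chromophore concentrations c from per-wavelength
    optoacoustic amplitudes m, assuming the linear model m = S @ c."""
    c, *_ = np.linalg.lstsq(S, measurements, rcond=None)
    return c
```

    In a volumetric dataset this fit would be applied independently at every voxel, which is why the unmixing step parallelizes well.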

  13. Facial recognition from volume-rendered magnetic resonance imaging data.

    PubMed

    Prior, Fred W; Brunsden, Barry; Hildebolt, Charles; Nolan, Tracy S; Pringle, Michael; Vaishnavi, S Neil; Larson-Prior, Linda J

    2009-01-01

    Three-dimensional (3-D) reconstructions of computed tomography (CT) and magnetic resonance (MR) brain imaging studies are a routine component of both clinical practice and clinical and translational research. A side effect of such reconstructions is the creation of a potentially recognizable face. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule requires that individually identifiable health information may not be used for research unless identifiers that may be associated with the health information including "Full face photographic images and other comparable images ..." are removed (de-identification). Thus, a key question is: Are reconstructed facial images comparable to full-face photographs for the purpose of identification? To address this question, MR images were selected from existing research repositories and subjects were asked to pair an MR reconstruction with one of 40 photographs. The chance probability that an observer could match a photograph with its 3-D MR image was 1 in 40 (0.025), and we considered 4 successes out of 40 (4/40, 0.1) to indicate that a subject could identify persons' faces from their 3-D MR images. Forty percent of the subjects were able to successfully match photographs with MR images with success rates higher than the null hypothesis success rate. The Blyth-Still-Casella 95% confidence interval for the 40% success rate was 29%-52%, and the 40% success rate was significantly higher (P < 0.001) than our null hypothesis success rate of 1 in 10 (0.10).
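    Under the guessing null hypothesis, each of the 40 match attempts succeeds with probability 1/40, so the chance of 4 or more successes follows a binomial tail probability. The standard-library sketch below illustrates why 4/40 is a reasonable evidence threshold; it is our illustration, not the study's statistical code.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Probability of >= 4 correct matches out of 40 at chance level 1/40
p_value = binom_tail(4, 40, 1 / 40)
```

    The resulting tail probability is below the conventional 0.05 level, so reaching 4/40 is unlikely under pure guessing.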

  14. Spatiotemporal Visualization of Time-Series Satellite-Derived CO2 Flux Data Using Volume Rendering and GPU-Based Interpolation on a Cloud-Driven Digital Earth

    NASA Astrophysics Data System (ADS)

    Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.

    2017-10-01

    The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.
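    The interpolation that smooths the animation between satellite time steps is, at its core, a linear blend of adjacent volume frames. A CPU-side NumPy sketch of the temporal part is below; it is illustrative only (the paper's version runs on the GPU in CUDA during rendering), and all names are ours.

```python
import numpy as np

def lerp_frames(vol_t0, vol_t1, alpha):
    """Linearly interpolate between two volume time steps.
    alpha = 0 returns vol_t0, alpha = 1 returns vol_t1."""
    return (1.0 - alpha) * vol_t0 + alpha * vol_t1
```

    Rendering several interpolated frames between each pair of real time steps yields the smooth spatiotemporal evolution described above without requiring denser satellite sampling.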

  15. Real-time interactive virtual tour on the World Wide Web (WWW)

    NASA Astrophysics Data System (ADS)

    Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi

    2003-12-01

    Web-based Virtual Tour has become a desirable and demanded application, yet a challenging one due to the nature of the web application running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process and high bandwidth and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and the virtual scenes can be generated directly from photos, skipping the modeling process. But these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walking around', which is a very important feature for giving virtual tourists an immersive experience. The Web-based Virtual Tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space using several snapshots from conventional photos.

  16. Neutrino Heating Drives a Supernova (Silent Animation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    When a neutron star forms, compression creates heat that generates neutrinos. When the star's core collapses, a shock wave propagates around the star but stalls. The neutrinos reenergize the stalled shock wave, and the convection created leads to an asymmetric explosion that shoots elements into the cosmos. The heat content, or entropy, is shown, with greater entropy represented by "warmer" hues. At center is a volume rendering of the developing explosion above the newly formed neutron star (based on a simulation with the CHIMERA code); side images of orthogonal slices through the star reveal additional detail. The movie starts 100 milliseconds after the formation of the neutron star, depicts the shockwave's bounce and follows astrophysical events up to 432 milliseconds after the bounce.

  17. Terahertz computed tomography of NASA thermal protection system materials

    NASA Astrophysics Data System (ADS)

    Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.

    2012-05-01

    A terahertz (THz) axial computed tomography system has been developed that uses time-domain measurements to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 m3 (1 ft3) with none of the safety concerns of x-ray computed tomography. In this study, the THz-CT system was evaluated for its ability to detect and characterize (1) an embedded void in Space Shuttle external fuel tank thermal protection system (TPS) foam material and (2) impact damage in a TPS configuration under consideration for use in NASA's multi-purpose Orion crew module (CM). Micro-focus X-ray CT was utilized to characterize the flaws and provide a baseline against which to compare the THz CT results.

  18. CT Demonstration of Caput Medusae

    ERIC Educational Resources Information Center

    Weber, Edward C.; Vilensky, Joel A.

    2009-01-01

    Maximum intensity and volume rendered CT displays of caput medusae are provided to demonstrate both the anatomy and physiology of this portosystemic shunt associated with portal hypertension. (Contains 2 figures.)

  19. Assessment of a new biomimetic scaffold and its effects on bone formation by OCT

    NASA Astrophysics Data System (ADS)

    Yang, Ying; Aydin, Halil M.; Piskin, Erhan; El Haj, Alicia J.

    2009-02-01

    The ultimate target of bone tissue engineering is to generate functional load-bearing bone. In nature, the porous volume in trabecular bone is occupied by osseous medulla. The natural bone matrix consists of hydroxyapatite (HA) crystals precipitated along collagen type I fibres. The mineral phase gives bone its strength while collagen provides flexibility. Without the mineral component, bone is very flexible and cannot bear loads, whereas it is brittle when the mineral phase is present without collagen. In this study, we designed and prepared a new type of scaffold which mimics the features of natural bone. The scaffold consists of three different components: a biphasic polymeric base composed of two different biodegradable polymers prepared using a dual-porogen approach, and bioactive agents, i.e., collagen and HA particles, which are distributed throughout the matrix only on the pore surfaces. The interaction of the bioactive scaffolds, which possess very high porosity and interconnected pore structures, with cells was investigated over a prolonged culture period using an osteoblastic cell line. The mineral HA particles have a slightly different refractive index from the other elements, such as the polymeric scaffold and cell/matrix, in a tissue engineering construct, and therefore exhibit brighter images in OCT. Thus, OCT offers a convenient means to assess the morphology and architecture of the blank biomimetic scaffolds. This study also takes a close look at OCT images of the cultured cell-scaffold constructs in order to assess newly formed minerals and matrix. The OCT assessments have been compared with the results from confocal and SEM analysis.

  20. Standardized rendering from IR surveillance motion imagery

    NASA Astrophysics Data System (ADS)

    Prokoski, F. J.

    2014-06-01

    Government agencies, including defense and law enforcement, increasingly make use of video from surveillance systems and camera phones owned by non-government entities. Making advanced and standardized motion imaging technology available to private and commercial users at cost-effective prices would benefit all parties. In particular, incorporating thermal infrared into commercial surveillance systems offers substantial benefits beyond night vision capability. Face rendering is a process to facilitate exploitation of thermal infrared surveillance imagery from the general area of a crime scene, to assist investigations with and without cooperating eyewitnesses. Face rendering automatically generates greyscale representations similar to police artist sketches for faces in surveillance imagery collected from locations and times proximate to a crime under investigation. Near-real-time generation of face renderings can provide law enforcement with an investigation tool to assess witness memory and credibility, and to integrate reports from multiple eyewitnesses. Renderings can be quickly disseminated through social media to warn of a person who may pose an immediate threat, and to solicit the public's help in identifying possible suspects and witnesses. Renderings are pose-standardized so as not to divulge the presence and location of eyewitnesses and surveillance cameras. Incorporation of thermal infrared imaging into commercial surveillance systems will significantly improve system performance, and reduce manual review times, at an incremental cost that will continue to decrease. Benefits to criminal justice would include improved reliability of eyewitness testimony and improved accuracy of distinguishing among minority groups in eyewitness and surveillance identifications.

  1. Hierarchical image-based rendering using texture mapping hardware

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, N

    1999-01-15

    Multi-layered depth images containing color and normal information for subobjects in a hierarchical scene model are precomputed with standard z-buffer hardware for six orthogonal views. These are adaptively selected according to the proximity of the viewpoint, and combined using hardware texture mapping to create "reprojected" output images for new viewpoints. (If a subobject is too close to the viewpoint, the polygons in the original model are rendered.) Specific z-ranges are selected from the textures with the hardware alpha test to give accurate 3D reprojection. The OpenGL color matrix is used to transform the precomputed normals into their orientations in the final view, for hardware shading.

  2. Experimental and rendering-based investigation of laser radar cross sections of small unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Bacher, Emmanuel; Christnacher, Frank

    2017-12-01

    Laser imaging systems are prominent candidates for detection and tracking of small unmanned aerial vehicles (UAVs) in current and future security scenarios. Laser reflection characteristics for laser imaging (e.g., laser gated viewing) of small UAVs are investigated to determine their laser radar cross section (LRCS) by analyzing the intensity distribution of laser reflections in high-resolution images. For the first time, LRCSs are determined in a combined experimental and computational approach using high-resolution laser gated viewing and three-dimensional rendering. An optimized simple surface model is calculated, taking into account diffuse and specular reflectance properties based on the Oren-Nayar and Cook-Torrance reflectance models, respectively.
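    The Cook-Torrance specular term used above combines a microfacet distribution D, a geometric attenuation G, and a Fresnel factor F. A minimal scalar sketch (Beckmann distribution, Schlick's Fresnel approximation) is given below; it is the textbook form of the model, not the authors' optimized surface model, and all parameter values are illustrative.

```python
from math import exp, pi

def schlick_fresnel(f0, cos_vh):
    """Schlick's approximation to the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_vh) ** 5

def beckmann_d(cos_nh, m):
    """Beckmann microfacet distribution with roughness m."""
    c2 = cos_nh * cos_nh
    return exp((c2 - 1.0) / (c2 * m * m)) / (pi * m * m * c2 * c2)

def cook_torrance_spec(cos_nl, cos_nv, cos_nh, cos_vh, m, f0):
    """Specular term D*G*F / (4 (n.l)(n.v)) from the cosines of the
    angles between normal n, light l, view v, and half-vector h."""
    d = beckmann_d(cos_nh, m)
    g = min(1.0,
            2.0 * cos_nh * cos_nv / cos_vh,
            2.0 * cos_nh * cos_nl / cos_vh)
    f = schlick_fresnel(f0, cos_vh)
    return d * g * f / (4.0 * cos_nl * cos_nv)
```

    The Oren-Nayar term handles the diffuse component in an analogous way; fitting the roughness and reflectance parameters of both terms to the measured intensity images is what yields the optimized surface model mentioned in the abstract.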

  3. Role of four-dimensional echocardiography with high-definition flow imaging and spatiotemporal image correlation in detecting fetal pulmonary veins.

    PubMed

    Sun, Xue; Zhang, Ying; Fan, Miao; Wang, Yu; Wang, Meilian; Siddiqui, Faiza Amber; Sun, Wei; Sun, Feifei; Zhang, Dongyu; Lei, Wenjia; Hu, Guyue

    2017-06-01

    Prenatal diagnosis of fetal total anomalous pulmonary venous connection (TAPVC) remains challenging for most screening sonographers. The purpose of this study was to evaluate the use of four-dimensional echocardiography with high-definition flow imaging and spatiotemporal image correlation (4D-HDFI) in identifying pulmonary veins in normal and TAPVC fetuses. We retrospectively reviewed and performed 4D-HDFI in 204 normal fetuses and 12 fetuses with a confirmed diagnosis of TAPVC. Cardiac volumes were available for post-analysis to obtain 4D-rendered images of the pulmonary veins. For the normal fetuses, two other traditional modalities, color Doppler and HDFI, were used to detect the number of pulmonary veins, and comparisons were made between each of these traditional methods and 4D-HDFI. For conventional echocardiography, the HDFI modality was superior to color Doppler in detecting more pulmonary veins in normal fetuses throughout the gestational period. 4D-HDFI was the best method during the second trimester of pregnancy for identifying normal fetal pulmonary veins. 4D-HDFI images vividly depicted the figure, course, and drainage of the pulmonary veins in both normal and TAPVC fetuses. HDFI and the advanced 4D-HDFI technique could facilitate identification of the anatomical features of pulmonary veins in both normal and TAPVC fetuses; 4D-HDFI therefore provides additional and more precise information than conventional echocardiography techniques. © 2017, Wiley Periodicals, Inc.

  4. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain vast amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To cope with this, many users crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level, e.g., the selected ROI strongly depends on the user and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images; users can select the algorithm to be applied. Further, the image processing tool provides visualization of segmented volume data and can set the scale, translation, etc. using a keyboard and mouse. However, the 3D objects, though visualized quickly, still need to be analyzed to obtain information for biologists. To analyze 3D microscopic images, we need quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object; this information can be used as classification features. A user can select the object to be analyzed. Our tool allows the selected object to be displayed in a new window, so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
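    A classic example of the automatic intensity-thresholding step described above is Otsu's method, which picks the threshold that maximizes the between-class variance of the intensity histogram. The NumPy sketch below illustrates the algorithm on the CPU; it is our illustration, not the tool's GPU implementation.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # weight of the background class
    mu = np.cumsum(p * centers)          # cumulative intensity mean
    mu_total = mu[-1]
    w1 = 1.0 - w0                        # weight of the foreground class
    valid = (w0 > 0) & (w1 > 0)
    # between-class variance for every candidate split point
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = ((mu_total * w0[valid] - mu[valid]) ** 2
                      / (w0[valid] * w1[valid]))
    return centers[np.argmax(sigma_b)]
```

    Voxels above the returned threshold form the segmented foreground; on a GPU the histogram accumulation and the per-threshold variance scan parallelize naturally, which is the source of the speedup the abstract reports.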

  5. A pitfall of the volume rendering method with 3D time-of-flight MRA: a case of a branching vessel at the aneurysm neck.

    PubMed

    Goto, Masami; Kunimatsu, Akira; Shojima, Masaaki; Abe, Osamu; Aoki, Shigeki; Hayashi, Naoto; Mori, Harushi; Ino, Kenji; Yano, Keiichi; Saito, Nobuhito; Ohtomo, Kuni

    2013-03-25

    We present a case in which the origin of a branching vessel at the aneurysm neck was observed in the wrong place on volume rendering (VR) of 3D time-of-flight MRA (3D-TOF-MRA) with a 3-Tesla MR system. In 3D-TOF-MRA it is often difficult to observe the origin of a branching vessel, but it is unusual for it to be observed in the wrong place. In the planning of interventional treatment and surgical procedures, false recognition, as in the unique case in the present report, is a serious problem. Decisions based only on VR with 3D-TOF-MRA can lead to suboptimal selection of clinical treatment.

  6. Imaging the eye fundus with real-time en-face spectral domain optical coherence tomography

    PubMed Central

    Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    Real-time display of processed en-face spectral domain optical coherence tomography (SD-OCT) images is important for diagnosis. However, due to the many data processing steps required, such as fast Fourier transformation (FFT), data re-sampling, spectral shaping, apodization, and zero padding, followed by a software cut through the acquired 3D volume to produce an en-face slice, conventional high-speed SD-OCT cannot render an en-face OCT image in real time. Recently we demonstrated a Master/Slave (MS)-OCT method that is highly parallelizable, as it provides reflectivity values of points at depth within an A-scan in parallel. This allows direct production of en-face images. In addition, the MS-OCT method does not require data linearization, which further simplifies the processing. The computation in our previous paper was, however, time consuming. In this paper we present an optimized algorithm that can provide en-face MS-OCT images much more quickly. Using such an algorithm we demonstrate around 10 times faster production of sets of en-face OCT images than previously obtained, as well as simultaneous real-time display of up to 4 en-face OCT images of 200 × 200 pixels from the fovea and the optic nerve of a volunteer. We also demonstrate 3D and B-scan OCT images obtained from sets of MS-OCT C-scans, i.e. with no FFT and no intermediate step of generating A-scans. PMID:24761303
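The parallelizable core of the Master/Slave idea can be sketched as follows: each en-face pixel at a chosen depth is obtained by correlating the measured channelled spectrum with a pre-stored mask for that depth, with no FFT and no intermediate A-scan, so every depth and pixel can be computed independently. The mask and spectrum models below are simplified illustrative assumptions, not the authors' calibration procedure:

```python
import numpy as np

k = np.linspace(6.0, 8.0, 512)                  # wavenumber samples (a.u.)

def mask(z):
    """Idealized Master/Slave mask for depth z (assumed analytic form)."""
    return np.exp(1j * 2.0 * k * z)

def spectrum(z_reflector, amplitude=1.0):
    """Channelled spectrum produced by a single reflector at depth z."""
    return amplitude * np.cos(2.0 * k * z_reflector)

def ms_reflectivity(s, z):
    """Reflectivity at depth z: correlation of spectrum with the stored mask."""
    return np.abs(np.vdot(mask(z), s)) / len(k)

s = spectrum(z_reflector=10.0)
at_depth = ms_reflectivity(s, 10.0)             # strong response at true depth
off_depth = ms_reflectivity(s, 14.0)            # weak response elsewhere
```

Because each `ms_reflectivity(s, z)` call is an independent inner product, a set of en-face images is just a batch of such products, which is what makes the method GPU-friendly.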

  7. Physically-Based Rendering of Particle-Based Fluids with Light Transport Effects

    NASA Astrophysics Data System (ADS)

    Beddiaf, Ali; Babahenini, Mohamed Chaouki

    2018-03-01

    Recent interactive rendering approaches aim to produce images efficiently. However, their time constraints deeply affect output accuracy and realism (many light phenomena are poorly supported, or not supported at all). To remedy this, we propose in this paper a physically-based fluid rendering approach. First, while state-of-the-art methods focus on isosurface rendering with only two refractions, our proposal (1) considers the fluid as a heterogeneous participating medium with refractive boundaries, and (2) supports both multiple refractions and scattering. Second, the proposed solution is fully particle-based, in the sense that no transformation of particles into a grid is required. This feature makes it able to handle many particle types (water, bubbles, foam, and sand). On top of that, a medium containing different fluids (differing in color, phase function, etc.) can also be rendered.
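The participating-medium view can be illustrated with a minimal, fully particle-based transmittance computation: density at any point is a sum of smooth particle kernels (no grid), and attenuation along a ray follows Beer-Lambert over the marched optical depth. Kernel shape, extinction coefficient, and the particle set are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.uniform(-0.5, 0.5, size=(200, 3))   # particle positions
h = 0.15                                            # kernel radius

def density(p):
    """Grid-free density: sum of Gaussian particle kernels at point p."""
    r2 = np.sum((particles - p) ** 2, axis=1)
    return np.sum(np.exp(-r2 / h**2))

def transmittance(origin, direction, sigma_t=0.5, n_steps=64, t_max=3.0):
    """Ray-marched Beer-Lambert attenuation through the particle cloud."""
    dt = t_max / n_steps
    ts = (np.arange(n_steps) + 0.5) * dt
    tau = sum(sigma_t * density(origin + t * direction) * dt for t in ts)
    return np.exp(-tau)

through = transmittance(np.array([-1.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
beside = transmittance(np.array([-1.5, 2.0, 0.0]), np.array([1.0, 0.0, 0.0]))
```

A ray through the cloud is attenuated while a ray missing it is not; multiple scattering and refractive boundaries would build on this same density query.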

  8. Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Baaklini, George Y.

    2001-01-01

    Most reverse engineering approaches involve imaging or digitizing an object, then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) is being used to carry out the construction of three-dimensional volume models and the subsequent generation of a stereolithography file suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g., Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7.
Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.

  9. Dynamic three-dimensional phase-contrast technique in MRI: application to complex flow analysis around the artificial heart valve

    NASA Astrophysics Data System (ADS)

    Kim, Soo Jeong; Lee, Dong Hyuk; Song, Inchang; Kim, Nam Gook; Park, Jae-Hyeung; Kim, JongHyo; Han, Man Chung; Min, Byong Goo

    1998-07-01

    The phase-contrast (PC) method of magnetic resonance imaging (MRI) has been used for quantitative measurements of flow velocity and volume flow rate. It is a noninvasive technique that provides accurate two-dimensional velocity images. Moreover, phase-contrast cine MRI combines the flow-dependent contrast of PC-MRI with the ability of cardiac cine imaging to produce images throughout the cardiac cycle. However, the accuracy of data acquired from a single through-plane velocity encoding can be reduced by the effect of flow direction, because in many practical cases flow directions are not uniform throughout the region of interest. In this study, we present a dynamic three-dimensional velocity vector mapping method using PC-MRI that can visualize complex flow patterns through dynamically displayed 3D volume-rendered images. The direction of velocity mapping can be selected along any of three orthogonal axes. By vector summation, the three maps can be combined to form a velocity vector map that determines the velocity regardless of the flow direction. At the same time, the cine method is used to observe the dynamic change of flow. We performed a phantom study to evaluate the accuracy of the suggested PC-MRI in continuous and pulsatile flow measurement. The pulsatile flow waveform was generated by a ventricular assist device (VAD), HEMO-PULSA (Biomedlab, Seoul, Korea). We varied the flow velocity, pulsatile flow waveform, and pulsing rate. The PC-MRI-derived velocities were compared with Doppler-derived results; the velocities of the two measurements showed a significant linear correlation. Dynamic three-dimensional velocity vector mapping was carried out for two cases. First, we applied it to the flow analysis around an artificial heart valve in a flat phantom, where we could observe the flow pattern around the valve through the 3-dimensional cine image. Next, we applied it to the complex flow inside the polymer sac used as the ventricle in a totally implantable artificial heart (TAH). As a result, we could observe the flow pattern around the valves of the sac, even though such complex flow cannot be detected correctly with the conventional phase-contrast method. In addition, we could calculate the cardiac output of the TAH sac by quantitative measurement of the flow volume across the outlet valve.
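The vector-summation step is simple to sketch: the three orthogonal velocity-encoded images combine into a direction-independent speed map, and the volume flow rate follows by integrating through-plane velocity over the lumen. The tiny arrays and pixel size below are illustrative assumptions:

```python
import numpy as np

# Per-pixel velocity encodings along three orthogonal axes (m/s).
vx = np.array([[0.3, 0.0], [0.4, 0.0]])
vy = np.array([[0.0, 0.0], [0.3, 0.0]])
vz = np.array([[0.4, 0.5], [0.0, 0.0]])

# Vector summation: speed map independent of flow direction.
speed = np.sqrt(vx**2 + vy**2 + vz**2)

# Volume flow rate: integrate through-plane velocity over the lumen pixels.
pixel_area = 1e-6                          # m^2 per pixel (1 mm x 1 mm, assumed)
lumen = speed > 0                          # crude lumen mask
flow_rate = np.sum(vz[lumen]) * pixel_area # m^3/s through the imaging plane
```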

  10. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    PubMed

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
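The two overlap measures used in the evaluation can be computed directly from binary label volumes. A minimal sketch with synthetic 2D masks (the identity Dice = 2J/(1 + J) makes the two scores interconvertible):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def dice(a, b):
    """Dice overlap coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Synthetic segmentation vs. reference: two shifted 6x6 squares.
seg = np.zeros((10, 10), bool); seg[2:8, 2:8] = True
ref = np.zeros((10, 10), bool); ref[3:9, 3:9] = True
j = jaccard(seg, ref)
d = dice(seg, ref)
```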

  11. Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.

    2015-03-01

    We present a system for registering the coordinate frame of an endoscope to pre- or intra-operatively acquired CT data, based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of the CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for the cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against moderate anatomical deformation.
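The search structure can be sketched as follows: evaluate a similarity metric between the observed endoscopic image and rendered predictions for many candidate poses, and keep the best. Here the "renderer" is a stand-in that merely translates a template image, and the metric is normalized cross-correlation; this is a sketch of the evaluation loop under those assumptions, not the paper's GPU pipeline or its stochastic optimizer:

```python
import numpy as np

rng = np.random.default_rng(2)
scene = rng.normal(size=(64, 64))

def render(pose):
    """Stand-in renderer: pose is an integer 2D shift of the scene."""
    dy, dx = pose
    return np.roll(np.roll(scene, dy, axis=0), dx, axis=1)

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

observed = render((3, -2))            # "endoscopic image" at the unknown pose

# Evaluate the metric for a grid of candidate poses (parallelizable on a GPU).
candidates = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
scores = [ncc(observed, render(p)) for p in candidates]
best = candidates[int(np.argmax(scores))]
```

Physical constraints would enter by simply discarding candidate poses that collide with the rendered anatomy before scoring them.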

  12. Archeological Investigations in Cochiti Reservoir, New Mexico. Volume 3. 1976-1977 Field Seasons.

    DTIC Science & Technology

    1979-01-01

    or methods are in a constant state of flux, and will undoubtedly continue so. The present In 1959, Baumhoff and Heizer suggested that the sys- paper...marrow extraction and when as estimates rather than counts were insect bodies and rendering bone grease. parts (10-25%), cocoons/larvae/eggs (1-10%), and...A yielded rendering bone grease or making soup. The association of 40 burned bone fragments. A 500 ml sample from grid the unidentified fragments and

  13. Environmental Impact Statement/Environmental Impact Report Disposal and Reuse of Fleet and Industrial Supply Center, Oakland Vision 2000 Maritime Development. Volume I

    DTIC Science & Technology

    1997-03-01

    these historic resources, rendering them the least preferable alternatives with respect to cultural resources. 2.3.2.4 Visual Resources 1 Construction of...communication). Others measures, however, were interrupted by the decision in 1995 to close the base, an action that rendered many mitigation measures unnecessary...of North American Indians, Vol. 8 (California), pp. 485495. Edited by R. F. Heizer . Smithsonian Institute, Washington, DC. Lienkaemper, J.J. 1992

  14. Environmental Impact Statement/Environmental Impact Report for the Disposal and Reuse of Mare Island Naval Shipyard Vallejo, California. Volume 1.

    DTIC Science & Technology

    1998-04-01

    Valley (Kroeber & Heizer 1970). In 1972, the Bureau of Indian Affairs listed only 11 individuals claiming Patwin ancestry in the entire territory...facility from the dredge disposal area to the upland open space scenic resource area would render this facility visible from viewpoints with . high...take. The COE probably would not issue a permit unless the USFWS rendered a "non-jeopardy" Biological Opinion, which would incorporate mitigations for

  15. Archaeological Investigations in the Gainesville Lake Area of the Tennessee-Tombigbee Waterway. Volume V. Archaeology of the Gainesville Lake Area: Synthesis.

    DTIC Science & Technology

    1982-09-01

    frequently awkward verbage thus rendering the report more readable. Richard Walling produced the figures and made many constructive coImnts on the...the Cobbs Swamp complex (Chase 1978), had developed into the Render - son complex (Dickens 1971). By approximately A.D. 400, check and simple j...Methods in Archaeology, edited by Robert F. Heizer and Sherburne F. Cook, pp. 60-92. Viking Fund Publications in Anthropology 28. Chicago. Stephenson

  16. Detection of compression vessels in trigeminal neuralgia by surface-rendering three-dimensional reconstruction of 1.5- and 3.0-T magnetic resonance imaging.

    PubMed

    Shimizu, Masahiro; Imai, Hideaki; Kagoshima, Kaiei; Umezawa, Eriko; Shimizu, Tsuneo; Yoshimoto, Yuhei

    2013-01-01

    Surface-rendered three-dimensional (3D) 1.5-T magnetic resonance (MR) imaging is useful for presurgical simulation of microvascular decompression. This study compared the sensitivity and specificity of 1.5- and 3.0-T surface-rendered 3D MR imaging for preoperative identification of the compression vessels of trigeminal neuralgia. One hundred consecutive patients underwent microvascular decompression for trigeminal neuralgia. Forty and sixty patients were evaluated by 1.5- and 3.0-T MR imaging, respectively. Three-dimensional MR images were constructed on the basis of MR imaging, angiography, and venography data and evaluated to determine the compression vessel before surgery. MR imaging findings were compared with the microsurgical findings to determine the sensitivity and specificity of 1.5- and 3.0-T MR imaging. The agreement between MR imaging and surgical findings depended on the compression vessels. For the superior cerebellar artery, 1.5- and 3.0-T MR imaging had 84.4% and 82.7% sensitivity and 100% and 100% specificity, respectively. For the anterior inferior cerebellar artery, 1.5- and 3.0-T MR imaging had 33.3% and 50% sensitivity and 92.9% and 95% specificity, respectively. For the petrosal vein, 1.5- and 3.0-T MR imaging had 75% and 64.3% sensitivity and 79.2% and 78.1% specificity, respectively. Complete pain relief was obtained in 36 of 40 and 55 of 60 patients undergoing 1.5- and 3.0-T MR imaging, respectively. The present study showed that both 1.5- and 3.0-T MR imaging provided high sensitivity and specificity for preoperative assessment of the compression vessels of trigeminal neuralgia. Preoperative 3D imaging provided very high quality presurgical simulation, resulting in excellent clinical outcomes. Copyright © 2013 Elsevier Inc. All rights reserved.
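The reported sensitivity and specificity values follow from the standard confusion-matrix definitions, comparing imaging predictions against the surgical findings. A minimal sketch with illustrative counts (not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: vessel identified on MR vs. confirmed at surgery.
sens, spec = sensitivity_specificity(tp=27, fn=5, tn=6, fp=2)
```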

  17. Dimensionality of visual complexity in computer graphics scenes

    NASA Astrophysics Data System (ADS)

    Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce

    2008-02-01

    How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment in which subjects judged the relative complexity of 21 high-resolution scenes rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the pooled subject responses using multidimensional scaling. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material/lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and found them to be only weakly correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
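Classical multidimensional scaling, the kind of analysis used here to embed the stimuli, can be sketched by double-centering the squared-distance matrix and taking the top eigenvectors as coordinates. The input distances below are synthetic, not the study's dissimilarity judgments:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed points from a pairwise distance matrix D (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dims]           # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Sanity check: points on a line embed with pairwise distances preserved.
pts = np.array([[0.0], [1.0], [3.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, dims=1)
```

For the study's data, `D` would hold pooled dissimilarity judgments over the 21 scenes and `dims=2` would yield the "numerosity" / "material-lighting" plane.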

  18. Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.

    PubMed

    Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong

    2006-04-01

    This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions is restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration, and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis is used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring involving both real and rendered bronchoscope images was conducted.
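The idea of recovering shading parameters and factoring out lighting can be sketched with a far simpler model than a full BRDF: with the bronchoscope's light effectively co-located with the camera, observed intensity is modelled as I = ρ·cos θ / r², ρ is recovered by least squares from registered video/CT samples, and the texture map is the ratio of observed intensity to predicted shading. The model and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
r = rng.uniform(1.0, 3.0, n)                 # surface distance from CT geometry
cos_t = rng.uniform(0.2, 1.0, n)             # incidence cosine from CT normals
rho_true = 0.7
# Synthetic "video" intensities under the assumed shading model, plus noise.
I = rho_true * cos_t / r**2 + rng.normal(0, 0.002, n)

# Recover the reflectance parameter by linear least squares.
A = (cos_t / r**2)[:, None]
rho = np.linalg.lstsq(A, I, rcond=None)[0][0]

# Predicted shading, and a lighting-independent texture as the residual ratio.
shading = rho * cos_t / r**2
texture = I / shading
```

New views would then re-evaluate `shading` for the new geometry and modulate it with the recovered `texture`.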

  19. Endocardial left ventricle feature tracking and reconstruction from tri-plane trans-esophageal echocardiography data

    NASA Astrophysics Data System (ADS)

    Dangi, Shusil; Ben-Zikri, Yehuda K.; Cahill, Nathan; Schwarz, Karl Q.; Linte, Cristian A.

    2015-03-01

    Two-dimensional (2D) ultrasound (US) has been the clinical standard for over two decades for monitoring and assessing cardiac function and providing support via intra-operative visualization and guidance for minimally invasive cardiac interventions. Developments in three-dimensional (3D) image acquisition and transducer design and technology have revolutionized echocardiography imaging, enabling both real-time 3D trans-esophageal and intra-cardiac image acquisition. However, in most cases clinicians do not access the entire 3D image volume when analyzing the data; rather, they focus on several key views that render the cardiac anatomy of interest during the US imaging exam. This approach enables image acquisition at a much higher spatial and temporal resolution. Two such common approaches are the bi-plane and tri-plane data acquisition protocols; as their names state, the former comprises two orthogonal image views, while the latter depicts the cardiac anatomy based on three co-axially intersecting views spaced at 60° to one another. Since cardiac anatomy is continuously changing, the intra-operative anatomy depicted using real-time US imaging also needs to be updated by tracking the key features of interest and the endocardial left ventricle (LV) boundaries. Therefore, rapid automatic feature tracking in US images is critical for three reasons: 1) to perform cardiac function assessment; 2) to identify the location of surgical targets for accurate tool-to-target navigation and on-target instrument positioning; and 3) to enable pre- to intra-op image registration as a means to fuse pre-op CT or MR images used during planning with intra-operative images for enhanced guidance. In this paper we utilize monogenic filtering, graph-cut based segmentation and robust spline smoothing in a combined workflow to process the acquired tri-plane TEE time series US images and demonstrate robust and accurate tracking of the LV endocardial features. We reconstruct the endocardial LV geometry using the tri-plane contours and spline interpolation, and assess the accuracy of the proposed workflow against gold-standard results from the GE EchoPAC PC clinical software according to quantitative clinical LV characterization parameters, such as length, circumference, area and volume. Our proposed combined workflow leads to consistent, rapid and automated identification of the LV endocardium, suitable for intra-operative applications and "on-the-fly" computer-assisted assessment of ejection fraction for cardiac function monitoring.

  20. Human tooth and root canal morphology reconstruction using magnetic resonance imaging.

    PubMed

    Drăgan, Oana Carmen; Fărcăşanu, Alexandru Ştefan; Câmpian, Radu Septimiu; Turcu, Romulus Valeriu Flaviu

    2016-01-01

    Visualization of the internal and external root canal morphology is very important for a successful endodontic treatment; however, it is difficult given the small size of the tooth and the complexity of the root canal system. Film-based or digital conventional radiographic techniques, as well as cone beam computed tomography, provide limited information on the dental pulp anatomy or have harmful effects. A new non-invasive diagnostic tool is magnetic resonance imaging, owing to its ability to image both hard and soft tissues. The aim of this study was to demonstrate that magnetic resonance imaging is a useful tool for imaging the anatomic conditions of the external and internal root canal morphology for endodontic purposes. The endodontic system of one freshly extracted wisdom tooth, chosen for its well-known anatomical variations, was mechanically shaped using a hybrid technique. After its preparation, the tooth was immersed in a container of saline solution and imaged immediately by magnetic resonance. A Bruker BioSpec magnetic resonance imaging scanner operated at 7.04 Tesla and based on Avance III radio frequency technology was used. InVesalius software was employed for the 3D reconstruction of the scanned tooth volume. The current ex-vivo experiment shows an accurate 3D volume-rendered reconstruction of the internal and external morphology of a human extracted and endodontically treated tooth, using a dataset of images acquired by magnetic resonance imaging. The external lingual and vestibular views of the tooth, as well as the occlusal view of the pulp chamber, the access cavity, the distal canal opening on the pulp chamber floor, the coronal third of the root canals, the degree of root separation and the apical fusion of the two mesial roots, details of the apical region, root canal curvatures, the furcal region and interradicular root grooves could be clearly delineated.
    Magnetic resonance imaging offers 3D image datasets with more information than conventional radiographic techniques. Owing to its ability to image both hard and soft dental tissues, magnetic resonance imaging can be successfully used as a 3D diagnostic imaging technique in dentistry. When choosing the imaging method, dental clinicians should weigh the benefit-risk ratio, taking into account the costs associated with magnetic resonance imaging and the harmful effects of ionizing radiation when cone beam computed tomography or conventional X-ray imaging is used.

  1. Expanding the Interaction Lexicon for 3D Graphics

    DTIC Science & Technology

    2001-11-01

    believe that extending it to work with image-based rendering engines is straightforward. I could modify plenoptic image editing [Seitz] to allow...M. Seitz and Kiriakos N. Kutulakos. Plenoptic Image Editing. International Conference on Computer Vision ‘98, pages 17-24. [ShapeCapture

  2. Visual Systems for Interactive Exploration and Mining of Large-Scale Neuroimaging Data Archives

    PubMed Central

    Bowman, Ian; Joshi, Shantanu H.; Van Horn, John D.

    2012-01-01

    Technological advancements in neuroimaging scanner engineering have improved the efficiency of data acquisition, and electronic data capture methods will likewise significantly expedite the populating of large-scale neuroimaging databases. As these archives grow in size, a particular challenge lies in examining and interacting with the information they contain through the development of compelling, user-driven approaches for data exploration and mining. In this article, we introduce the informatics visualization for neuroimaging (INVIZIAN) framework for the graphical rendering of, and dynamic interaction with, the contents of large-scale neuroimaging data sets. We describe the rationale behind INVIZIAN, detail its development, and demonstrate its usage in examining a collection of over 900 T1-anatomical magnetic resonance imaging (MRI) image volumes drawn from across a diverse set of clinical neuroimaging studies in a leading neuroimaging database. Using a collection of cortical surface metrics and measures of brain similarity, INVIZIAN graphically displays brain surfaces as points in a coordinate space and enables classification of clusters of neuroanatomically similar MRI images and data mining. As an initial step toward addressing the need for such user-friendly tools, INVIZIAN provides a unique means to interact with large quantities of electronic brain imaging archives in ways suitable for hypothesis generation and data mining. PMID:22536181

  3. Visualizing the anatomical-functional correlation of the human brain

    NASA Astrophysics Data System (ADS)

    Chang, YuKuang; Rockwood, Alyn P.; Reiman, Eric M.

    1995-04-01

    Three-dimensional tomographic images obtained from different modalities, or from the same modality at different times, provide complementary information. For example, while PET shows brain function, images from MRI identify anatomical structures. In this paper, we investigate the problem of displaying the available information about structure and function together. Several steps are described to achieve our goal: segmentation of the data, registration, resampling, and display. Segmentation is used to separate brain tissue from surrounding tissues, especially in the MRI data. Registration aligns the different modalities as closely as possible. Resampling follows from the registration, since the two data sets do not usually correspond and the rendering method is most easily achieved if the data lie on the same grid used for display. We combine several techniques to display the data. MRI data are reconstructed from 2D slices into 3D structures, from which isosurfaces are extracted and represented by approximating polygonalizations. These are then displayed using standard graphics pipelines, including shaded and transparent images. PET data measure the rates of cerebral glucose utilization or oxygen consumption. The PET image is best displayed as a volume of luminous particles. The combination of both display methods allows the viewer to compare the functional information contained in the PET data with the anatomically more precise MRI data.
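Once the data sets share a grid, combining the two display layers amounts to compositing. A minimal sketch using alpha blending, with the PET layer's opacity scaled by uptake; the images and the opacity factor are illustrative assumptions, not the paper's particle renderer:

```python
import numpy as np

# Shaded anatomical layer (grayscale MRI isosurface rendering, assumed values).
mri_shaded = np.full((4, 4), 0.6)

# Functional layer: a focal PET "hot spot" of high uptake.
pet = np.zeros((4, 4))
pet[1:3, 1:3] = 1.0

# Alpha compositing: PET opacity scales with uptake, anatomy shows through.
alpha = 0.4 * pet
fused = (1 - alpha) * mri_shaded + alpha * pet
```

Pixels with no PET activity keep the pure anatomical shading, while active regions glow over the anatomy.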

  4. Advancing ovarian folliculometry with selective plane illumination microscopy

    NASA Astrophysics Data System (ADS)

    Lin, Hsiao-Chun Amy; Dutta, Rahul; Mandal, Subhamoy; Kind, Alexander; Schnieke, Angelika; Razansky, Daniel

    2016-12-01

    Determination of ovarian status and follicle monitoring are common methods of diagnosing female infertility. We evaluated the suitability of selective plane illumination microscopy (SPIM) for the study of ovarian follicles. The large field of view and fast acquisition speed of our SPIM system enables rendering of volumetric image stacks from intact whole porcine ovarian follicles, clearly visualizing follicular features including follicle volume and average diameter (70 μm-2.5 mm), their spherical asymmetry parameters, size of developing cumulus oophorus complexes (40 μm-110 μm), and follicular wall thickness (90 μm-120 μm). Follicles at all developmental stages were identified. A distribution of the theca thickness was measured for each follicle, and a relationship between these distributions and the stages of follicular development was discerned. The ability of the system to non-destructively generate sub-cellular resolution 3D images of developing follicles, with excellent image contrast and high throughput capacity compared to conventional histology, suggests that it can be used to monitor follicular development and identify structural abnormalities indicative of ovarian ailments. Accurate folliculometric measurements provided by SPIM images can immensely help the understanding of ovarian physiology and provide important information for the proper management of ovarian diseases.

  5. Evaluation of MR scanning, image registration, and image processing methods to visualize cortical veins for neurosurgery

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke J.; Rutten, G. J. M.; Willems, Peter W. A.; Viergever, Max A.

    2000-04-01

    The visualization of brain vessels on the cortex helps the neurosurgeon in two ways: to avoid blood vessels when specifying the trepanation entry, and to overcome errors in the surgical navigation system due to brain shift. We compared 3D T1 MR, 3D T1 MR with gadolinium contrast, and MR venography as scanning techniques; mutual information as the registration technique; and thresholding and multi-vessel enhancement as image processing techniques. We evaluated the volume-rendered results based on their quality and correspondence with photos taken during surgery. It appears that with 3D T1 MR scans, gadolinium is required to show cortical veins. The visibility of small cortical veins is strongly enhanced by subtracting a 3D T1 MR baseline scan, which should be registered to the scan with gadolinium contrast even when the scans are made during the same session. Multi-vessel enhancement helps to clarify the view of small vessels by reducing the noise level, but strikingly does not reveal more vessels. MR venography does show intracerebral veins in high detail, but is, in its current form, unsuited to showing cortical veins due to the low contrast with CSF.

  6. Image processing, geometric modeling and data management for development of a virtual bone surgery system.

    PubMed

    Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge

    2008-01-01

    This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Tool Kit) software. Implementation of the developed techniques consists of developing a virtual bone-drilling software program, which allows the user to manipulate a virtual drill to make holes with the use of a PHANToM device on a bone model derived from real CT scan data.
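    The material-removal simulation described above can be sketched as a Boolean subtraction on a voxel occupancy grid. The spherical drill-tip model and the function name below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def carve_sphere(bone, center, radius, voxel_size=1.0):
        """Boolean subtraction of a spherical drill tip from a bone occupancy grid.

        `bone` is a 3D boolean array (True = bone material); the result keeps
        only voxels outside the sphere, i.e. bone AND NOT tool.
        """
        zz, yy, xx = np.indices(bone.shape)
        dist2 = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
                 + (xx - center[2]) ** 2) * voxel_size ** 2
        return bone & (dist2 > radius ** 2)
    ```

    Repeating this subtraction as the tool moves yields the progressive material removal; an isosurface extractor such as Marching Cubes can then regenerate the polygonal surface for display after each update.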

  7. Hybrid-coded 3D structured illumination imaging with Bayesian estimation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Chen, Hsi-Hsun; Luo, Yuan; Singh, Vijay R.

    2016-03-01

    Light-induced fluorescence microscopy has long been used to observe objects at the microscale, such as cellular samples. However, the transfer function of a lens-based imaging system limits the resolution, so the fine, detailed structure of a sample cannot be identified clearly. Resolution-enhancement techniques aim to break the resolution limit of a given objective. Over the past decades, resolution enhancement has been investigated through a variety of strategies, including photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), stimulated emission depletion (STED), and structured illumination microscopy (SIM). Among these methods, only SIM can intrinsically improve the resolution limit of a system without taking the structural properties of the object into account. In this paper, we develop a SIM method combined with Bayesian estimation, with optical sectioning capability provided by HiLo processing, yielding high resolution throughout a 3D volume. This 3D SIM provides both optical sectioning and resolution enhancement, and is robust to noise owing to the proposed data-driven Bayesian estimation reconstruction. To validate the 3D SIM, we present simulation results for the algorithm and experimental results demonstrating 3D resolution enhancement.

  8. Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data

    NASA Technical Reports Server (NTRS)

    Bose, Tamal

    2000-01-01

    A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, so the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the areas of compression, visualization, and analysis. Therefore, a representation of the data is needed that provides a structure conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate, making it suitable for real-time applications. It should be able to remove additive noise and reconstruct lost data samples from images.
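    Wavelet shrinkage of the kind alluded to above can be sketched with a one-level Haar transform: decompose, soft-threshold the detail coefficients that carry the noise, and invert. The threshold value and function name are illustrative assumptions.

    ```python
    import numpy as np

    def haar_denoise(x, thresh):
        """One-level Haar wavelet shrinkage for an even-length 1-D signal."""
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass) coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass) coefficients
        d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0)   # soft-threshold the details
        y = np.empty_like(x, dtype=float)
        y[0::2] = (a + d) / np.sqrt(2)         # inverse Haar transform
        y[1::2] = (a - d) / np.sqrt(2)
        return y
    ```

    With a zero threshold the round trip is lossless; with a threshold larger than the detail magnitudes, each sample pair collapses to its mean, which is the limiting smoothing behavior.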

  9. Geometric modeling of the temporal bone for cochlea implant simulation

    NASA Astrophysics Data System (ADS)

    Todd, Catherine A.; Naghdy, Fazel; O'Leary, Stephen

    2004-05-01

    The first stage in the development of a clinically valid surgical simulator for training otologic surgeons in performing cochlear implantation is presented. For this purpose, a geometric model of the temporal bone has been derived from a cadaver specimen using the biomedical image processing software package Analyze (AnalyzeDirect, Inc) and its three-dimensional reconstruction is examined. Simulator construction begins with registration and processing of a Computed Tomography (CT) medical image sequence. Important anatomical structures of the middle and inner ear are identified and segmented from each scan in a semi-automated threshold-based approach. Linear interpolation between image slices produces a three-dimensional volume dataset: the geometric model. Artefacts are effectively eliminated using a semi-automatic seeded region-growing algorithm and unnecessary bony structures are removed. Once validated by an Ear, Nose and Throat (ENT) specialist, the model may be imported into the Reachin Application Programming Interface (API) (Reachin Technologies AB) for visual and haptic rendering associated with a virtual mastoidectomy. Interaction with the model is realized through a haptics interface, providing the user with accurate torque and force feedback. Electrode array insertion into the cochlea will be introduced in the final stage of design.

  10. 3D optical coherence tomography image registration for guiding cochlear implant insertion

    NASA Astrophysics Data System (ADS)

    Cheon, Gyeong-Woo; Jeong, Hyun-Woo; Chalasani, Preetham; Chien, Wade W.; Iordachita, Iulian; Taylor, Russell; Niparko, John; Kang, Jin U.

    2014-03-01

    In cochlear implant surgery, an electrode array is inserted into the cochlear canal to restore hearing to a person who is profoundly deaf or significantly hearing impaired. One critical part of the procedure is the insertion of the electrode array, which looks like a thin wire, into the cochlear canal. Although X-ray or computed tomography (CT) could be used as a reference to evaluate the pathway of the whole electrode array, there is no way to depict the intra-cochlear canal and basal turn intra-operatively to help guide insertion of the electrode array. Optical coherence tomography (OCT) is a highly effective way of visualizing internal structures of the cochlea. A swept-source OCT (SSOCT) system with a center wavelength of 1.3 μm and 2D galvanometer mirrors was used to achieve 3D imaging with 7 mm depth. A graphics processing unit (GPU), OpenGL, C++, and C# were integrated for simultaneous real-time volumetric rendering. The 3D volume images taken by the OCT system were assembled and registered, and could be used to guide a cochlear implant. We performed a feasibility study using both dry and wet temporal bones, and the results are presented.

  11. Automatic cell identification and visualization using digital holographic microscopy with head mounted augmented reality devices.

    PubMed

    O'Connor, Timothy; Rawat, Siddharth; Markman, Adam; Javidi, Bahram

    2018-03-01

    We propose a compact imaging system that integrates an augmented reality head mounted device with digital holographic microscopy for automated cell identification and visualization. A shearing interferometer is used to produce holograms of biological cells, which are recorded using customized smart glasses containing an external camera. After image acquisition, segmentation is performed to isolate regions of interest containing biological cells in the field-of-view, followed by digital reconstruction of the cells, which is used to generate a three-dimensional (3D) pseudocolor optical path length profile. Morphological features are extracted from the cell's optical path length map, including mean optical path length, coefficient of variation, optical volume, projected area, projected area to optical volume ratio, cell skewness, and cell kurtosis. Classification is performed using the random forest classifier, support vector machines, and K-nearest neighbor, and the results are compared. Finally, the augmented reality device displays the cell's pseudocolor 3D rendering of its optical path length profile, extracted features, and the identified cell's type or class. The proposed system could allow a healthcare worker to quickly visualize cells using augmented reality smart glasses and extract the relevant information for rapid diagnosis. To the best of our knowledge, this is the first report on the integration of digital holographic microscopy with augmented reality devices for automated cell identification and visualization.
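    Classification over morphological features like those listed above can be sketched with a minimal K-nearest-neighbor vote (the abstract's pipeline also evaluates random forests and SVMs). The synthetic feature values in the sketch are invented for illustration only.

    ```python
    import numpy as np

    def knn_classify(train_X, train_y, query, k=3):
        """Majority-vote k-nearest-neighbor label for one feature vector `query`."""
        d = np.linalg.norm(train_X - query, axis=1)      # Euclidean distances
        nearest = train_y[np.argsort(d)[:k]]             # labels of the k closest samples
        labels, counts = np.unique(nearest, return_counts=True)
        return labels[np.argmax(counts)]
    ```

    In practice the feature vectors would be the per-cell measurements (mean optical path length, optical volume, skewness, kurtosis, and so on), standardized before distance computation.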

  12. Validation of the Five-Phase Method for Simulating Complex Fenestration Systems with Radiance against Field Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.

    2016-08-29

    The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.

  13. Ultralow-dose CT of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction and model-based iterative reconstruction: 2D and 3D image quality.

    PubMed

    Widmann, Gerlig; Schullian, Peter; Gassner, Eva-Maria; Hoermann, Romed; Bale, Reto; Puelacher, Wolfgang

    2015-03-01

    OBJECTIVE. The purpose of this article is to evaluate 2D and 3D image quality of high-resolution ultralow-dose CT images of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) in comparison with standard filtered backprojection (FBP). MATERIALS AND METHODS. A formalin-fixed human cadaver head was scanned using a clinical reference protocol at a CT dose index volume of 30.48 mGy and a series of five ultralow-dose protocols at 3.48, 2.19, 0.82, 0.44, and 0.22 mGy using FBP and ASIR at 50% (ASIR-50), ASIR at 100% (ASIR-100), and MBIR. Blinded 2D axial and 3D volume-rendered images were compared with each other by three readers using top-down scoring. Scores were analyzed per protocol or dose and reconstruction. All images were compared with the FBP reference at 30.48 mGy. A nonparametric Mann-Whitney U test was used. Statistical significance was set at p < 0.05. RESULTS. For 2D images, the FBP reference at 30.48 mGy did not statistically significantly differ from ASIR-100 at 3.48 mGy, ASIR-100 at 2.19 mGy, and MBIR at 0.82 mGy. MBIR at 2.19 and 3.48 mGy scored statistically significantly better than the FBP reference (p = 0.032 and 0.001, respectively). For 3D images, the FBP reference at 30.48 mGy did not statistically significantly differ from all reconstructions at 3.48 mGy; FBP and ASIR-100 at 2.19 mGy; FBP, ASIR-100, and MBIR at 0.82 mGy; MBIR at 0.44 mGy; and MBIR at 0.22 mGy. CONCLUSION. MBIR (2D and 3D) and ASIR-100 (2D) may significantly improve subjective image quality of ultralow-dose images and may allow more than 90% dose reductions.
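    The nonparametric Mann-Whitney U test used above has a simple pairwise formulation; the sketch below computes only the U statistic (not the p-value), and the sample scores in the test are invented for illustration.

    ```python
    import numpy as np

    def mann_whitney_u(x, y):
        """Mann-Whitney U statistic for sample `x` versus sample `y`.

        U counts the pairs (x_i, y_j) with x_i > y_j, with ties counting one half;
        it ranges from 0 to len(x) * len(y), and equals half that product when the
        two samples are drawn from the same distribution.
        """
        diff = np.asarray(x)[:, None] - np.asarray(y)[None, :]
        return np.sum(diff > 0) + 0.5 * np.sum(diff == 0)
    ```

    The pairwise form is O(n·m) but transparent; rank-based formulas give the same value and scale better for large samples.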

  14. Light transport on path-space manifolds

    NASA Astrophysics Data System (ADS)

    Jakob, Wenzel Alban

    The pervasive use of computer-generated graphics in our society has led to strict demands on their visual realism. Generally, users of rendering software want their images to look, in various ways, "real", which has been a key driving force towards methods that are based on the physics of light transport. Until recently, industrial practice has relied on a different set of methods that had comparatively little rigorous grounding in physics---but within the last decade, advances in rendering methods and computing power have come together to create a sudden and dramatic shift, in which physics-based methods that were formerly thought impractical have become the standard tool. As a consequence, considerable attention is now devoted towards making these methods as robust as possible. In this context, robustness refers to an algorithm's ability to process arbitrary input without large increases of the rendering time or degradation of the output image. One particularly challenging aspect of robustness entails simulating the precise interaction of light with all the materials that comprise the input scene. This dissertation focuses on one specific group of materials that has fundamentally been the most important source of difficulties in this process. Specular materials, such as glass windows, mirrors or smooth coatings (e.g. on finished wood), account for a significant percentage of the objects that surround us every day. It is perhaps surprising, then, that it is not well-understood how they can be accommodated within the theoretical framework that underlies some of the most sophisticated rendering methods available today. Many of these methods operate using a theoretical framework known as path space integration. But this framework makes no provisions for specular materials: to date, it is not clear how to write down a path space integral involving something as simple as a piece of glass. 
Although implementations can in practice still render these materials by side-stepping limitations of the theory, they often suffer from unusably slow convergence; improvements to this situation have been hampered by the lack of a thorough theoretical understanding. We address these problems by developing a new theory of path-space light transport which, for the first time, cleanly incorporates specular scattering into the standard framework. Most of the results obtained in the analysis of the ideally smooth case can also be generalized to rendering of glossy materials and volumetric scattering so that this dissertation also provides a powerful new set of tools for dealing with them. The basis of our approach is that each specular material interaction locally collapses the dimension of the space of light paths so that all relevant paths lie on a submanifold of path space. We analyze the high-dimensional differential geometry of this submanifold and use the resulting information to construct an algorithm that is able to "walk" around on it using a simple and efficient equation-solving iteration. This manifold walking algorithm then constitutes the key operation of a new type of Markov Chain Monte Carlo (MCMC) rendering method that computes lighting through very general families of paths that can involve arbitrary combinations of specular, near-specular, glossy, and diffuse surface interactions as well as isotropic or highly anisotropic volume scattering. We demonstrate our implementation on a range of challenging scenes and evaluate it against previous methods.
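    The "equation-solving iteration" at the heart of the manifold walk can be illustrated in a drastically simplified setting: a single flat refractive interface in 2D, where Newton's method solves the specular (Snell) constraint for the crossing point. This toy setup, and every name in it, is an illustrative assumption, not the dissertation's algorithm.

    ```python
    import numpy as np

    def refract_point(S, R, eta, iters=30):
        """Newton-solve for the crossing x on the interface y = 0 such that Snell's
        law sin(theta_i) = eta * sin(theta_t) holds for the path S -> (x, 0) -> R.
        Assumes S lies above the interface (S[1] > 0) and R below (R[1] < 0)."""
        def residual(x):
            sin_i = (x - S[0]) / np.hypot(x - S[0], S[1])      # sine of incident angle
            sin_t = (R[0] - x) / np.hypot(R[0] - x, R[1])      # sine of transmitted angle
            return sin_i - eta * sin_t
        x = 0.5 * (S[0] + R[0])                   # initial guess: midpoint
        for _ in range(iters):
            f = residual(x)
            df = (residual(x + 1e-6) - f) / 1e-6  # finite-difference derivative
            x -= f / df                           # Newton step on the specular constraint
        return x
    ```

    The full method generalizes this idea: each specular vertex contributes one such constraint, and the high-dimensional Newton iteration walks along the constraint submanifold of path space.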

  15. [Design of visualized medical images network and web platform based on MeVisLab].

    PubMed

    Xiang, Jun; Ye, Qing; Yuan, Xun

    2017-04-01

    With the development of the "Internet +" trend, further requirements for the mobility of medical images have arisen in the medical field. In view of this demand, this paper presents a web-based visual medical imaging platform. First, the feasibility and technical points of web-based medical imaging are analyzed. CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images are reconstructed three-dimensionally by MeVisLab and packaged as X3D (Extensible 3D Graphics) files, as shown in the present paper. Then, a B/S (Browser/Server) system specially designed for 3D images is built using HTML 5 and the WebGL rendering engine library, and the X3D image files are parsed and rendered by the system. The results of this study showed that the platform is suitable for multiple operating systems, realizing cross-platform and mobile access to medical image data. Future directions for the development of medical imaging platforms are also discussed. Web application technology will not only promote the sharing of medical image data, but also facilitate image-based remote medical consultations and distance learning.

  16. Dual modality virtual colonoscopy workstation: design, implementation, and preliminary evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Dongqing; Meissner, Michael

    2006-03-01

    The aim of this study is to develop a virtual colonoscopy (VC) workstation that supports both CT (computed tomography) and MR (magnetic resonance) imaging procedures. The workflow should be optimized and able to take advantage of both image modalities. The technological breakthrough is the real-time volume rendering of spatially intensity-inhomogeneous MR images to achieve a high-quality 3D endoluminal view. VC visualizes CT or MR tomography images for the detection of colonic polyps and lesions; it is also called CT or MR colonography, depending on the imaging modality employed. Published results of large-scale clinical trials demonstrated more than 90% sensitivity for polyp detection with certain CT colonography (CTC) workstations. A drawback of CT colonography is the radiation exposure. MR colonography (MRC) is free from X-ray radiation and achieved almost 100% specificity for polyp detection in published trials. The better tissue contrast of MR images also allows accurate diagnosis of inflammatory bowel disease, which is usually difficult in CTC. At present, most VC workstations are designed for CT examinations. They are not able to display multi-sequence MR series concurrently in a single application, and automatic correlation between 2D and 3D views is not available due to the difficulty of building 3D models from MR images. This study aims at enhancing a commercial VC product that was successfully used for CTC to equally support the dark-lumen protocol MR procedure.

  17. Analyzer-based imaging technique in tomography of cartilage and metal implants: a study at the ESRF

    PubMed Central

    COAN, Paola; MOLLENHAUER, Juergen; WAGNER, Andreas; Muehleman, Carol; BRAVIN, Alberto

    2009-01-01

    Monitoring the progression of osteoarthritis (OA) and the effects of therapy during clinical trials is still a challenge for present clinical imaging techniques, since they have intrinsic limitations and are sensitive only to advanced OA stages. In very severe cases, partial or complete joint replacement surgery is the only solution for reducing pain and restoring joint function. Poor imaging quality of joint surfaces and of metal implants in practically all medical imaging technologies calls for the development of new techniques that are sensitive to the stages preceding irreversible damage of the cartilage tissue. In this scenario, X-ray phase contrast modalities could play an important role since they can provide improved contrast compared to conventional absorption radiography, at a similar or even reduced tissue radiation dose. In this study, analyzer-based imaging (ABI), a technique that is sensitive to X-ray refraction and permits high scatter rejection, has been successfully applied in vitro to excised human synovial joints and sheep implants. Pathological and healthy joints as well as metal implants have been imaged in projection and computed tomography ABI mode at high resolution and clinically compatible doses (< 10 mGy). Volume rendering and segmentation permitted visualization of the cartilage from volumetric CT scans. The results demonstrate that ABI can provide an unequivocal non-invasive diagnosis of the state of disease of the joint and be considered a new tool in orthopaedic research. PMID:18584983

  18. Unilateral empyema impacts the assessment of regional lung ventilation by electrical impedance tomography.

    PubMed

    Bläser, D; Pulletz, S; Becher, T; Schädler, D; Elke, G; Weiler, N; Frerichs, I

    2014-06-01

    Several studies have shown the ability of electrical impedance tomography (EIT) to assess regional ventilation distribution in human lungs. Fluid accumulation in the pleural space, as in empyema, typically occurs on one side of the chest and may influence the distribution of ventilation and the corresponding EIT findings. The aim of our study was to examine this effect on the assessment of regional ventilation by EIT. Six patients suffering from unilateral empyema and intubated with a double-lumen endotracheal tube were studied. EIT data were acquired during volume-controlled ventilation with bilateral (tidal volume (V(T)): 800 ml) and unilateral ventilation (V(T): 400 ml) of the right and left lungs. Mean tidal amplitudes of the EIT signal were calculated in all image pixels. The sums of these values, expressed as relative impedance change (rel. ΔZ), were then determined in whole images and in functionally defined regions-of-interest (ROI). The sums of rel. ΔZ calculated during the two cases of one-lung ventilation, on either the affected or unaffected side, were significantly smaller than during bilateral ventilation. However, in contrast to previous findings in patients with no pleural pathology, very low values of rel. ΔZ were found when the lung on the affected side was ventilated. ROI-based analysis rendered higher values than whole-image analysis in this case; nonetheless, the values were significantly smaller than when the unaffected side was ventilated, in spite of identical V(T). In conclusion, our results indicate that the presence of empyema may affect the quantitative evaluation of regional lung ventilation by EIT.
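    The pixel-wise tidal amplitudes and ROI sums described above can be sketched as follows; the breath-detection indices, array shapes, and function names are illustrative assumptions rather than the study's processing chain.

    ```python
    import numpy as np

    def tidal_amplitudes(frames, insp_idx, exp_idx):
        """Mean tidal amplitude per pixel from an EIT frame stack of shape (T, H, W),
        given frame indices of detected end-inspiration and end-expiration."""
        return frames[insp_idx].mean(axis=0) - frames[exp_idx].mean(axis=0)

    def roi_rel_dz(amp, roi_mask):
        """Sum of relative impedance change (rel. dZ) within a boolean ROI mask."""
        return amp[roi_mask].sum()
    ```

    Comparing such ROI sums between one-lung and bilateral ventilation is exactly the quantity the study reports as rel. ΔZ.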

  19. Application of MPVR and TL-VR with 64-row MDCT in neonates with congenital EA and distal TEF.

    PubMed

    Wen, Yang; Peng, Yun; Zhai, Ren-You; Li, Ying-Zi

    2011-03-28

    To assess the application of multiple planar volume reconstruction (MPVR) and three-dimensional (3D) transparency lung volume rendering (TL-VR) with 64-row multidetector-row computed tomography (MDCT) in neonates with congenital esophageal atresia (EA) and distal tracheoesophageal fistula (TEF). Twenty neonates (17 boys, 3 girls) with EA and distal TEF at a mean age of 4.6 d (range 1-16 d) were enrolled in this study. A helical scan was performed on the 64-row MDCT at 64 × 0.625 mm collimation. EA and TEF were reconstructed with MPVR and TL-VR, respectively. Initial diagnosis of EA was made by chest radiography showing the inserted catheter in the proximal blind-ended esophageal pouch. Manifestations on MDCT images were compared with the findings at surgery. MDCT showed the proximal and distal esophageal pouches in all 20 cases. No significant difference was observed in the gaps between the proximal and distal esophageal pouches detected by MPVR and TL-VR. The lengths of the gaps between the proximal and distal esophageal pouches detected by MPVR and TL-VR correlated well with the findings at surgery (R = 0.87, P < 0.001). MPVR images revealed the orifice of the TEF in 13 cases, while TL-VR images showed it in 4 cases. EA and distal TEF can be reconstructed using MPVR and TL-VR of 64-row MDCT, a noninvasive technique to demonstrate the distal esophageal pouches and inter-pouch distance in neonates with EA and distal TEF.

  20. A new method of morphological comparison for bony reconstructive surgery: maxillary reconstruction using scapular tip bone

    NASA Astrophysics Data System (ADS)

    Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2010-02-01

    Esthetic appearance is one of the most important factors for reconstructive surgery. The current practice of maxillary reconstruction chooses radial forearm, fibula, or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic outcome. Considering similarity factors and vasculature advantages, reconstructive surgeons recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and the palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit at the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal and hemi palate reconstruction. The simulation procedure included volume segmentation, converting the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71 +/- 0.16 mm
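    The RMS conformance reported above can be sketched as the root-mean-square of nearest-neighbour distances between two registered surface point clouds; the brute-force distance matrix and the function name are illustrative assumptions (a k-d tree would replace the full matrix for real mesh densities).

    ```python
    import numpy as np

    def rms_conformance(flap_pts, target_pts):
        """RMS of nearest-neighbour distances from each flap surface point to the
        reconstruction target surface (both given as (N, 3) point clouds)."""
        d = np.linalg.norm(flap_pts[:, None, :] - target_pts[None, :, :], axis=2)
        nearest = d.min(axis=1)                 # distance to the closest target point
        return np.sqrt(np.mean(nearest ** 2))
    ```

    Evaluating this metric for many candidate orientations of the flap model is what drives the color-map guidance described in the abstract.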

  1. Visualization of risk structures for interactive planning of image guided radiofrequency ablation of liver tumors

    NASA Astrophysics Data System (ADS)

    Rieder, Christian; Schwier, Michael; Weihusen, Andreas; Zidowitz, Stephan; Peitgen, Heinz-Otto

    2009-02-01

    Image guided radiofrequency ablation (RFA) is becoming a standard minimally invasive procedure for tumor treatment in the clinical routine. The visualization of pathological tissue and potential risk structures like vessels or important organs gives essential support in image guided pre-interventional RFA planning. In this work, our aim is to present novel visualization techniques for interactive RFA planning that support the physician with spatial information about pathological structures and with finding trajectories that do not harm vitally important tissue. Furthermore, we illustrate three-dimensional applicator models of different manufacturers combined with the corresponding ablation areas in homogeneous tissue, as specified by the manufacturers, to convey the estimated extent of cell destruction caused by ablation. The visualization techniques are embedded in a workflow-oriented application designed for use in the clinical routine. To allow high-quality volume rendering, we integrated a visualization method using the fuzzy c-means algorithm. This method automatically defines a transfer function for volume visualization of vessels without the need for a segmentation mask. However, insufficient visualization of the displayed vessels caused by low data quality can be improved using local vessel segmentation in the vicinity of the lesion. We also provide an interactive segmentation technique for liver tumors, for volumetric measurement and for the visualization of pathological tissue combined with anatomical structures. In order to support coagulation estimation with respect to the heat-sink effect of the cooling blood flow, which decreases thermal ablation, a numerical simulation of the heat distribution is provided.
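    A transfer function can be derived from fuzzy c-means memberships over the intensity values, as the abstract describes. The 1-D sketch below uses quantile initialization; the function name, cluster count, and initialization scheme are illustrative assumptions rather than the authors' implementation.

    ```python
    import numpy as np

    def fuzzy_cmeans_1d(samples, n_clusters=3, m=2.0, iters=50):
        """1-D fuzzy c-means on intensity samples. Returns cluster centers and the
        (n_samples, n_clusters) membership matrix; a membership column can serve
        directly as the opacity curve of a volume-rendering transfer function."""
        samples = np.asarray(samples, dtype=float)
        # Spread the initial centers across the intensity range via quantiles.
        centers = np.quantile(samples, np.linspace(0.1, 0.9, n_clusters))
        for _ in range(iters):
            d = np.abs(samples[:, None] - centers[None, :]) + 1e-12
            u = d ** (-2.0 / (m - 1.0))              # reciprocal-power memberships
            u /= u.sum(axis=1, keepdims=True)        # each row sums to 1
            w = u ** m
            centers = (w * samples[:, None]).sum(axis=0) / w.sum(axis=0)
        return centers, u
    ```

    Mapping the membership of the "vessel" cluster to opacity gives a data-driven transfer function without an explicit segmentation mask, which is the core idea referenced in the abstract.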

  2. New impressive capabilities of SE-workbench for EO/IR real-time rendering of animated scenarios including flares

    NASA Astrophysics Data System (ADS)

    Le Goff, Alain; Cathala, Thierry; Latger, Jean

    2015-10-01

    To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft as viewed by EO/IR threats. For this purpose, it extended the SE-Workbench suite from OKTAL-SE with functionality to predict a realistic aircraft IR signature and is now integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. Recent evolutions mainly concern the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of static plume signatures and, lastly for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.

  3. Education Catching Up with Science: Preparing Students for Three-Dimensional Literacy in Cell Biology

    PubMed Central

    Kramer, IJsbrand M.; Dahmani, Hassen-Reda; Delouche, Pamina; Bidabe, Marissa; Schneeberger, Patricia

    2012-01-01

    The large number of experimentally determined molecular structures has led to the development of a new semiotic system in the life sciences, with increasing use of accurate molecular representations. To determine how this change impacts students’ learning, we incorporated image tests into our introductory cell biology course. Groups of students used a single text dealing with signal transduction, which was supplemented with images made in one of three iconographic styles. Typically, we employed realistic renderings, using computer-generated Protein Data Bank (PDB) structures; realistic-schematic renderings, using shapes inspired by PDB structures; or schematic renderings, using simple geometric shapes to represent cellular components. The control group received a list of keywords. When students were asked to draw and describe the process in their own style and to reply to multiple-choice questions, the three iconographic approaches equally improved the overall outcome of the tests (relative to keywords). Students found the three approaches equally useful but, when asked to select a preferred style, they largely favored a realistic-schematic style. When students were asked to annotate “raw” realistic images, both keywords and schematic representations failed to prepare them for this task. We conclude that supplementary images facilitate the comprehension process and despite their visual clutter, realistic representations do not hinder learning in an introductory course. PMID:23222839

  4. Education catching up with science: preparing students for three-dimensional literacy in cell biology.

    PubMed

    Kramer, Ijsbrand M; Dahmani, Hassen-Reda; Delouche, Pamina; Bidabe, Marissa; Schneeberger, Patricia

    2012-01-01

    The large number of experimentally determined molecular structures has led to the development of a new semiotic system in the life sciences, with increasing use of accurate molecular representations. To determine how this change impacts students' learning, we incorporated image tests into our introductory cell biology course. Groups of students used a single text dealing with signal transduction, which was supplemented with images made in one of three iconographic styles. Typically, we employed realistic renderings, using computer-generated Protein Data Bank (PDB) structures; realistic-schematic renderings, using shapes inspired by PDB structures; or schematic renderings, using simple geometric shapes to represent cellular components. The control group received a list of keywords. When students were asked to draw and describe the process in their own style and to reply to multiple-choice questions, the three iconographic approaches equally improved the overall outcome of the tests (relative to keywords). Students found the three approaches equally useful but, when asked to select a preferred style, they largely favored a realistic-schematic style. When students were asked to annotate "raw" realistic images, both keywords and schematic representations failed to prepare them for this task. We conclude that supplementary images facilitate the comprehension process and despite their visual clutter, realistic representations do not hinder learning in an introductory course.

  5. Patient-specific coronary territory maps

    NASA Astrophysics Data System (ADS)

    Beliveau, Pascale; Setser, Randolph; Cheriet, Farida; O'Donnell, Thomas

    2007-03-01

    It is standard practice for physicians to rely on empirical, population-based models to define the relationship between regions of left ventricular (LV) myocardium and the coronary arteries which supply them with blood. Physicians use these models to infer the presence and location of disease within the coronary arteries based on the condition of the myocardium within their distribution (which can be established non-invasively using imaging techniques such as ultrasound or magnetic resonance imaging). However, coronary artery anatomy often varies from the assumed model distribution in the individual patient; thus, a non-invasive method to determine the correspondence between coronary artery anatomy and LV myocardium would have immediate clinical impact. This paper introduces an image-based rendering technique for visualizing maps of coronary distribution in a patient-specific approach. From an image volume derived from computed tomography (CT) images, a segmentation of the LV epicardial surface, as well as the paths of the coronary arteries, is obtained. These paths form seed points for a competitive region growing algorithm applied to the surface of the LV. A ray casting procedure in spherical coordinates from the center of the LV is then performed. The cast rays are mapped to a two-dimensional circular surface, forming our coronary distribution map. We applied our technique to a patient with known coronary artery disease, and a qualitative evaluation by an expert in coronary cardiac anatomy showed promising results.
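The competitive region-growing step described above can be sketched as a multi-source breadth-first search, here on a small grid standing in for the sampled LV epicardial surface; the grid layout and artery labels are illustrative assumptions, not the authors' data structures.

```python
from collections import deque

def competitive_region_growing(shape, seeds):
    """Multi-source BFS: each cell is claimed by the nearest seed's label.

    `shape` is the (rows, cols) of a grid standing in for the sampled LV
    surface; `seeds` maps (row, col) -> artery label (both hypothetical)."""
    rows, cols = shape
    labels = [[None] * cols for _ in range(rows)]
    queue = deque()
    for (r, c), lab in seeds.items():
        labels[r][c] = lab
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] is None:
                labels[nr][nc] = labels[r][c]   # first artery to reach wins
                queue.append((nr, nc))
    return labels

territory = competitive_region_growing((4, 6), {(0, 0): "LAD", (3, 5): "RCA"})
```

In the paper the grown labels are then sampled by rays cast in spherical coordinates and flattened onto the circular map; only the growing step is sketched here.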

  6. Time-Optimized High-Resolution Readout-Segmented Diffusion Tensor Imaging

    PubMed Central

    Reishofer, Gernot; Koschutnig, Karl; Langkammer, Christian; Porter, David; Jehna, Margit; Enzinger, Christian; Keeling, Stephen; Ebner, Franz

    2013-01-01

    Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper the clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically depending on spatial variations in signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion-weighted images by means of complex independent component analysis. Moreover, the combination of these features enables fully user-independent processing of the diffusion data. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization over un-regularized data with respect to parameters relevant for fiber tracking, such as Mean Fiber Length, Track Count, Volume, and Voxel Count. Specifically, for in vivo data, findings suggest that tractography from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high-resolution (1×1×2.5 mm3) diffusion tensor imaging of the entire brain applicable in a clinical context. PMID:24019951
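As a rough illustration of the idea, the following is a gradient-descent sketch of (smoothed) total-variation denoising with a spatially varying regularization parameter, applied to a 1D trace rather than the full diffusion tensor; the step size, lambda values, and smoothing constant are all assumptions, not the paper's algorithm.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=400, step=0.05, eps=1e-2):
    """Gradient descent on 0.5*(x - y)^2 + lam * |x'| (smoothed), in 1D.

    `lam` may be an array, mimicking the paper's spatially varying,
    SNR-driven regularization parameter; all constants are illustrative."""
    x = y.copy()
    for _ in range(n_iter):
        g = np.gradient(x)
        w = g / np.sqrt(g * g + eps)              # derivative of smoothed |x'|
        x -= step * ((x - y) - lam * np.gradient(w))
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # a step edge
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy, lam=np.full(100, 0.3))
```

Total variation is chosen here for the same reason as in the paper: it smooths noise on flat regions while largely preserving edges.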

  7. Three Dimensional Sheaf of Ultrasound Planes Reconstruction (SOUPR) of Ablated Volumes

    PubMed Central

    Ingle, Atul; Varghese, Tomy

    2014-01-01

    This paper presents an algorithm for three dimensional reconstruction of tumor ablations using ultrasound shear wave imaging with electrode vibration elastography. Radiofrequency ultrasound data frames are acquired over imaging planes that form a subset of a sheaf of planes sharing a common axis of intersection. Shear wave velocity is estimated separately on each imaging plane using a piecewise linear function fitting technique with a fast optimization routine. An interpolation algorithm then computes velocity maps on a fine grid over a set of C-planes that are perpendicular to the axis of the sheaf. A full three dimensional rendering of the ablation can then be created from this stack of C-planes; hence the name “Sheaf Of Ultrasound Planes Reconstruction” or SOUPR. The algorithm is evaluated through numerical simulations and also using data acquired from a tissue-mimicking phantom. Reconstruction quality is gauged using contrast and contrast-to-noise ratio measurements, and changes in quality from using an increasing number of planes in the sheaf are quantified. The highest contrast of 5 dB is seen between the stiffest and softest regions of the phantom. Under certain idealizing assumptions on the true shape of the ablation, good reconstruction quality while maintaining a fast processing rate can be obtained with as few as 6 imaging planes, suggesting that the method is suited for parsimonious data acquisitions with very few sparsely chosen imaging planes. PMID:24808405
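The angular interpolation between sparse sheaf planes onto a C-plane point at a fixed radius might look like the following sketch; the plane angles, velocity values, and function name are illustrative, not the authors' implementation.

```python
import numpy as np

def interp_cplane(plane_angles, plane_values, query_angle):
    """Linearly interpolate shear wave velocity between the two sheaf
    planes that bracket `query_angle`, periodic in 2*pi. `plane_values`
    holds one velocity sample per plane at a fixed radius (illustrative)."""
    a = np.asarray(plane_angles, dtype=float)
    v = np.asarray(plane_values, dtype=float)
    # close the circle by repeating the first plane at angle + 2*pi
    a_ext = np.concatenate([a, [a[0] + 2 * np.pi]])
    v_ext = np.concatenate([v, [v[0]]])
    q = query_angle % (2 * np.pi)
    return np.interp(q, a_ext, v_ext)

angles = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]   # 4 sheaf half-planes
vals = [1.0, 2.0, 3.0, 2.0]                        # velocities (m/s) at one radius
```

Repeating this for every radius and every C-plane yields the fine-grid velocity maps from which the 3D rendering is assembled.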

  8. Three-dimensional sheaf of ultrasound planes reconstruction (SOUPR) of ablated volumes.

    PubMed

    Ingle, Atul; Varghese, Tomy

    2014-08-01

    This paper presents an algorithm for 3-D reconstruction of tumor ablations using ultrasound shear wave imaging with electrode vibration elastography. Radio-frequency ultrasound data frames are acquired over imaging planes that form a subset of a sheaf of planes sharing a common axis of intersection. Shear wave velocity is estimated separately on each imaging plane using a piecewise linear function fitting technique with a fast optimization routine. An interpolation algorithm then computes velocity maps on a fine grid over a set of C-planes that are perpendicular to the axis of the sheaf. A full 3-D rendering of the ablation can then be created from this stack of C-planes; hence the name "Sheaf Of Ultrasound Planes Reconstruction" or SOUPR. The algorithm is evaluated through numerical simulations and also using data acquired from a tissue-mimicking phantom. Reconstruction quality is gauged using contrast and contrast-to-noise ratio measurements, and changes in quality from using an increasing number of planes in the sheaf are quantified. The highest contrast of 5 dB is seen between the stiffest and softest regions of the phantom. Under certain idealizing assumptions on the true shape of the ablation, good reconstruction quality while maintaining a fast processing rate can be obtained with as few as six imaging planes, suggesting that the method is suited for parsimonious data acquisitions with very few sparsely chosen imaging planes.

  9. Topological Galleries: A High Level User Interface for Topology Controlled Volume Rendering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacCarthy, Brian; Carr, Hamish; Weber, Gunther H.

    2011-06-30

    Existing topological interfaces to volume rendering are limited by their reliance on sophisticated knowledge of topology by the user. We extend previous work by describing topological galleries, an interface for novice users that is based on the design galleries approach. We report three contributions: an interface based on hierarchical thumbnail galleries to display the containment relationships between topologically identifiable features, the use of the pruning hierarchy instead of branch decomposition for contour tree simplification, and drag-and-drop transfer function assignment for individual components. Initial results suggest that this approach suffers from limitations due to rapid drop-off of feature size in the pruning hierarchy. We explore these limitations by providing statistics of feature size as a function of depth in the pruning hierarchy of the contour tree.
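A minimal sketch of leaf pruning by feature size, the simplification strategy the paper adopts in place of branch decomposition; the parent-dictionary tree layout and the feature-size values are assumptions for illustration, not the paper's data structures.

```python
def prune_contour_tree(parent, size, threshold):
    """Iteratively remove leaf branches whose feature size falls below
    `threshold`. `parent` maps node -> parent (root maps to None);
    `size` maps node -> feature size. Returns the surviving node set."""
    nodes = set(parent)
    while True:
        children = {n: 0 for n in nodes}
        for n in nodes:
            p = parent[n]
            if p in children:
                children[p] += 1
        prunable = [n for n in nodes
                    if children[n] == 0 and parent[n] is not None
                    and size[n] < threshold]
        if not prunable:
            return nodes
        nodes -= set(prunable)   # pruning may expose new small leaves

parent = {'A': None, 'B': 'A', 'C': 'A', 'D': 'C'}
size = {'A': 100.0, 'B': 10.0, 'C': 1.0, 'D': 0.5}
kept = prune_contour_tree(parent, size, threshold=2.0)
```

Because small features dominate deep in the pruning hierarchy, raising the threshold collapses many gallery thumbnails at once, which is the limitation the paper reports.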

  10. LIPS database with LIPService: a microscopic image database of intracellular structures in Arabidopsis guard cells.

    PubMed

    Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro

    2013-05-16

    Intracellular configuration is an important feature of cell status. Recent advances in microscopic imaging techniques allow us to easily obtain a large number of microscopic images of intracellular structures. In this circumstance, automated microscopic image recognition techniques are of extreme importance to future phenomics/visible screening approaches. However, there was no benchmark microscopic image dataset for intracellular organelles in a specified plant cell type. We previously established the Live Images of Plant Stomata (LIPS) database, a publicly available collection of optical-section images of various intracellular structures of plant guard cells, as a model system of environmental signal perception and transduction. Here we report recent updates to the LIPS database and the establishment of a database table, LIPService. We updated the LIPS dataset and established a new interface named LIPService to promote efficient inspection of intracellular structure configurations. Cell nuclei, microtubules, actin microfilaments, mitochondria, chloroplasts, endoplasmic reticulum, peroxisomes, endosomes, Golgi bodies, and vacuoles can be filtered using probe names or morphometric parameters such as stomatal aperture. In addition to the serial optical sectional images of the original LIPS database, new volume-rendering data for easy web browsing of three-dimensional intracellular structures have been released to allow easy inspection of their configurations or relationships with cell status/morphology. We also demonstrated the utility of the new LIPS image database for automated organelle recognition of images from another plant cell image database with image clustering analyses. The updated LIPS database provides a benchmark image dataset for representative intracellular structures in Arabidopsis guard cells. The newly released LIPService allows users to inspect the relationship between organellar three-dimensional configurations and morphometrical parameters.

  11. Interactive Volume Rendering of Diffusion Tensor Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hlawitschka, Mario; Weber, Gunther; Anwander, Alfred

    As 3D volumetric images of the human body become an increasingly crucial source of information for the diagnosis and treatment of a broad variety of medical conditions, advanced techniques that allow clinicians to efficiently and clearly visualize volumetric images become increasingly important. Interaction has proven to be a key concept in analysis of medical images because static images of 3D data are prone to artifacts and misunderstanding of depth. Furthermore, fading out clinically irrelevant aspects of the image while preserving contextual anatomical landmarks helps medical doctors to focus on important parts of the images without becoming disoriented. Our goal was to develop a tool that unifies interactive manipulation and context preserving visualization of medical images with a special focus on diffusion tensor imaging (DTI) data. At each image voxel, DTI provides a 3 x 3 tensor whose entries represent the 3D statistical properties of water diffusion locally. Water motion that is preferential to specific spatial directions suggests structural organization of the underlying biological tissue; in particular, in the human brain, the naturally occurring diffusion of water in the axon portion of neurons is predominantly anisotropic along the longitudinal direction of the elongated, fiber-like axons [MMM+02]. This property has made DTI an emerging source of information about the structural integrity of axons and axonal connectivity between brain regions, both of which are thought to be disrupted in a broad range of medical disorders including multiple sclerosis, cerebrovascular disease, and autism [Mos02, FCI+01, JLH+99, BGKM+04, BJB+03].
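A standard scalar derived from the 3 x 3 tensor described above is the fractional anisotropy (FA), which summarizes how directional the local diffusion is; the formula is the conventional DTI definition, and the example tensors below are illustrative.

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy of a symmetric 3x3 diffusion tensor:
    the normalized spread of its eigenvalues, 0 (isotropic) to 1."""
    lam = np.linalg.eigvalsh(D)
    md = lam.mean()                                # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

iso = np.diag([1.0, 1.0, 1.0])     # isotropic diffusion, FA = 0
fiber = np.diag([1.7, 0.3, 0.3])   # axon-like anisotropic tensor
```

Maps of FA are a common input to the kind of context-preserving DTI visualization the abstract describes.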

  12. Color analysis and image rendering of woodblock prints with oil-based ink

    NASA Astrophysics Data System (ADS)

    Horiuchi, Takahiko; Tanimoto, Tetsushi; Tominaga, Shoji

    2012-01-01

    This paper proposes a method for analyzing the color characteristics of woodblock prints made with oil-based ink and rendering realistic images based on camera data. The analysis results of woodblock prints show some characteristic features in comparison with oil paintings: 1) a woodblock print can be divided into several cluster areas, each with similar surface spectral reflectance; and 2) strong specular reflection from the influence of overlapping paints arises only in specific cluster areas. By considering these properties, we develop an effective rendering algorithm by modifying our previous algorithm for oil paintings. A set of surface spectral reflectances of a woodblock print is represented by using only a small number of average surface spectral reflectances and the registered scaling coefficients, whereas the previous algorithm for oil paintings required high-dimensional surface spectral reflectances at all pixels. In the rendering process, in order to reproduce the strong specular reflection in specific cluster areas, we use two sets of parameters in the Torrance-Sparrow model for cluster areas with or without strong specular reflection. An experiment on a woodblock print with oil-based ink was performed to demonstrate the feasibility of the proposed method.
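The two-parameter-set idea can be sketched with a simplified Torrance-Sparrow specular lobe (Fresnel and geometric attenuation omitted for brevity); the function name and parameter values below are assumptions for illustration, not the paper's fitted values.

```python
import math

def ts_specular(theta_r, delta, k_s, sigma):
    """Simplified Torrance-Sparrow specular lobe: a Gaussian facet-slope
    distribution in the half-angle `delta`, divided by the cosine of the
    viewing angle `theta_r` (radians). k_s scales the lobe; sigma is
    the surface roughness."""
    return k_s * math.exp(-delta ** 2 / (2 * sigma ** 2)) / math.cos(theta_r)

# hypothetical parameter sets for the two kinds of cluster areas
strong = dict(k_s=0.8, sigma=0.1)   # clusters with strong specular highlights
weak = dict(k_s=0.05, sigma=0.3)    # matte clusters
```

At render time, each pixel would use the parameter set of the cluster it belongs to, which is how the method confines strong highlights to specific areas.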

  13. Integrating the visualization concept of the medical imaging interaction toolkit (MITK) into the XIP-Builder visual programming environment

    NASA Astrophysics Data System (ADS)

    Wolf, Ivo; Nolden, Marco; Schwarz, Tobias; Meinzer, Hans-Peter

    2010-02-01

    The Medical Imaging Interaction Toolkit (MITK) and the eXtensible Imaging Platform (XIP) both aim at facilitating the development of medical imaging applications, but provide support on different levels. MITK offers support from the toolkit level, whereas XIP comes with a visual programming environment. XIP is strongly based on Open Inventor. Open Inventor with its scene graph-based rendering paradigm was not specifically designed for medical imaging, but focuses on creating dedicated visualizations. MITK has a visualization concept with a model-view-controller like design that assists in implementing multiple, consistent views on the same data, which is typically required in medical imaging. In addition, MITK defines a unified means of describing position, orientation, bounds, and (if required) local deformation of data and views, supporting e.g. images acquired with gantry tilt and curved reformations. The actual rendering is largely delegated to the Visualization Toolkit (VTK). This paper presents an approach of how to integrate the visualization concept of MITK with XIP, especially into the XIP-Builder. This is a first step of combining the advantages of both platforms. It enables experimenting with algorithms in the XIP visual programming environment without requiring a detailed understanding of Open Inventor. Using MITK-based add-ons to XIP, any number of data objects (images, surfaces, etc.) produced by algorithms can simply be added to an MITK DataStorage object and rendered into any number of slice-based (2D) or 3D views. Both MITK and XIP are open-source C++ platforms. The extensions presented in this paper will be available from www.mitk.org.

  14. Use of an imaging device after nonablative radiofrequency (Pellevé): treatment of periorbital rhytids.

    PubMed

    Javate, Reynaldo M; Grantoza, Charlene L; Buyucan, Kathleen Faye N

    2014-01-01

    To use the Canfield Reveal imager for objective photodocumentation of the effect of nonablative radiofrequency (Pellevé) treatment on periorbital rhytids. This is a prospective cohort study. Twelve patients underwent 1 to 2 sessions of nonablative radiofrequency (Pellevé) treatment over the periorbital region; inclusion criteria were age 30 to 60 years, minimal tissue laxity, and shallow wrinkle development. Standardized reproducible photographs (left, frontal, right views) using the Canfield Reveal imager's superimposition feature were taken of each patient prior to treatment, immediately after application, and at the 2nd, 4th, 6th, and 8th week follow-up. Brow elevation was measured in the pre- and posttreatment photographs with the Canfield Reveal imager, which rendered the photographs as 3-dimensional images. Comparison of the pre- and posttreatment photographs taken via the Canfield Reveal imager showed reduction of wrinkles and smoothing and tightening of the eyelid and periorbital tissue. Patients exhibited an average increase of 2.05 mm (p<0.001) in eyebrow lift and 0.98 mm (p<0.001) in superior eyelid crease elevation immediately after treatment. Eight weeks after treatment, average brow elevation was measured at 3.52 mm (p<0.001) and crease elevation at 1.84 mm (p<0.001). The 3-dimensional imaging feature, rendered in normal skin tone and shades of gray, showed softening of fine lines and crow's feet after treatment. Furthermore, it also rendered a color relief that highlighted the changes, with depressions noted to decrease after treatment. The Canfield Reveal imager can be used in the objective photodocumentation of subtle and modest effects of nonablative radiofrequency (Pellevé) treatment to the periorbital region.

  15. Fetal Urinary Tract Anomalies: Review of Pathophysiology, Imaging, and Management.

    PubMed

    Mileto, Achille; Itani, Malak; Katz, Douglas S; Siebert, Joseph R; Dighe, Manjiri K; Dubinsky, Theodore J; Moshiri, Mariam

    2018-05-01

    Common fetal anomalies of the kidneys and urinary tract encompass a complex spectrum of abnormalities that can be detected prenatally by ultrasound. Common fetal anomalies of the kidneys and urinary tract can affect amniotic fluid volume production with the development of oligohydramnios or anhydramnios, resulting in fetal pulmonary hypoplasia and, potentially, abnormal development of other fetal structures. We provide an overview of common fetal anomalies of the kidneys and urinary tract with an emphasis on sonographic patterns as well as pathologic and postnatal correlation, along with brief recommendations for postnatal management. Of note, we render an updated classification of fetal abnormalities of the kidneys and urinary tract based on the presence or absence of associated urinary tract dilation. In addition, we review the 2014 classification of urinary tract dilation based on the Linthicum multidisciplinary consensus panel.

  16. Disrupting the old order of imaging.

    PubMed

    Jha, Saurabh; Lexa, Frank J

    2013-06-01

    The purpose of this article is to expand on the economic concepts of creative destruction and disruptive innovation to imagine scenarios in which diagnostic imaging modalities and certain imaging paradigms can be rendered obsolete. Potential disrupters of imaging are novel drugs, clinical trials, accurate biomarkers, and government regulations. A taxonomic schema can be used to better predict the decline of certain imaging modalities.

  17. Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre; Kuehnel, Frank; Stutz, John

    2004-01-01

    Generative models of natural images have long been used in computer vision. However, since they only describe 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is required when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering. We focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering, and we also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.

  18. Three Dimensional Projection Environment for Molecular Design and Surgical Simulation

    DTIC Science & Technology

    2011-08-01

    bypasses the cumbersome meshing process. The deformation model is only comprised of mass nodes, which are generated by sampling the object volume before... force should minimize the penetration volume, the haptic feedback force is derived directly. Additionally, a post-processing technique is developed to... render distinct physical tissue properties across different interaction areas. The proposed approach does not require any pre-processing and is

  19. Cornea and ocular lens visualized with three-dimensional confocal microscopy

    NASA Astrophysics Data System (ADS)

    Masters, Barry R.

    1992-08-01

    This paper demonstrates the advantages of three-dimensional reconstruction of the cornea and the ocular crystalline lens by confocal microscopy and volume-rendering computer techniques. The advantages of noninvasive observation of ocular structures in living, unstained, unfixed tissue include the following: the tissue is in a natural living state without the artifacts of fixation, mechanical sectioning, and staining; the three-dimensional structure can be observed from any viewpoint and quantitatively analyzed; the dynamics of morphological changes can be studied; and the use of confocal microscopic observation results in a reduction of the number of animals required for ocular morphometric studies. The main advantage is that the dynamic morphology of ocular structures can be investigated in living ocular tissue. A laser scanning confocal microscope was used in the reflected-light mode to obtain two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon ion laser with 488 nm wavelength. The microscope objective was a Leitz 25X, NA 0.6 water immersion lens. The 400-micron-thick cornea was optically sectioned into 133 three-micron sections. The semi-transparent cornea and the in-situ ocular lens were visualized as high-resolution, high-contrast two-dimensional images. The undersampling resulted in a three-dimensional rendering in which the corneal thickness (z-axis) is compressed. The structures observed in the cornea include: superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, basal lamina, nerve plexus, nerve fibers, free nerve endings in the basal epithelial cells, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in-situ ocular lens include: lens capsule, lens epithelial cells, and individual lens fibers.

  20. Professional efficiencies for diagnostic imaging services rendered by different physicians: analysis of recent medicare multiple procedure payment reduction policy.

    PubMed

    Duszak, Richard; Silva, Ezequiel; Kim, Angela J; Barr, Robert M; Donovan, William D; Kassing, Pamela; McGinty, Geraldine; Allen, Bibb

    2013-09-01

    The aim of this study was to quantify potential physician work efficiencies and appropriate multiple procedure payment reductions for different same-session diagnostic imaging studies interpreted by different physicians in the same group practice. Medicare Resource-Based Relative Value Scale data were analyzed to determine the relative contributions of various preservice, intraservice, and postservice physician diagnostic imaging work activities. An expert panel quantified potential duplications in professional work activities when separate examinations were performed during the same session by different physicians within the same group practice. Maximum potential work duplications for various imaging modalities were calculated and compared with those used as the basis of CMS payment policy. No potential intraservice work duplication was identified when different examination interpretations were rendered by different physicians in the same group practice. When multiple interpretations within the same modality were rendered by different physicians, maximum potential duplicated preservice and postservice activities ranged from 5% (radiography, fluoroscopy, and nuclear medicine) to 13.6% (CT). Maximum mean potential duplicated work relative value units ranged from 0.0049 (radiography and fluoroscopy) to 0.0413 (CT). This equates to overall potential total work reductions ranging from 1.39% (nuclear medicine) to 2.73% (CT). Across all modalities, this corresponds to maximum Medicare professional component physician fee reductions of 1.23 ± 0.38% (range, 0.95%-1.87%) for services within the same modality, more than an order of magnitude smaller than those implemented by CMS. For services from different modalities, potential duplications were too small to quantify. Although potential efficiencies exist in physician preservice and postservice work when same-session, same-modality imaging services are rendered by different physicians in the same group practice, these are relatively minuscule and have been grossly overestimated by current CMS payment policy. Greater transparency and methodologic rigor in government payment policy development are warranted. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  1. Medical imaging systems

    DOEpatents

    Frangioni, John V

    2013-06-25

    A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or laparoscopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.

  2. Computed tomographic venography for varicose veins of the lower extremities: prospective comparison of 80-kVp and conventional 120-kVp protocols.

    PubMed

    Cho, Eun-Suk; Kim, Joo Hee; Kim, Sungjun; Yu, Jeong-Sik; Chung, Jae-Joon; Yoon, Choon-Sik; Lee, Hyeon-Kyeong; Lee, Kyung Hee

    2012-01-01

    To prospectively investigate the feasibility of an 80-kilovolt (peak) (kVp) protocol in computed tomographic venography for varicose veins of the lower extremities by comparison with the conventional 120-kVp protocol. Attenuation values and signal-to-noise ratio of iodine contrast medium (CM) were determined in a water phantom for 2 tube voltages (80 kVp and 120 kVp). Among 100 patients, 50 patients were scanned with 120 kVp and 150 effective milliampere seconds (mAs(eff)), and the other 50 patients were scanned with 80 kVp and 390 mAs(eff) after the administration of 1.7 mL/kg of CM (370 mg of iodine per milliliter). The 2 groups were compared for venous attenuation, contrast-to-noise ratio, and subjective degree of venous enhancement, image noise, and overall diagnostic image quality. In the phantom, the attenuation value and signal-to-noise ratio value for iodine CM at 80 kVp were 63.8% and 33.0% higher, respectively, than those obtained at 120 kVp. The mean attenuation of the measured veins of the lower extremities was 148.3 Hounsfield units (HU) for the 80-kVp protocol and 94.8 HU for the 120-kVp protocol. Contrast-to-noise ratio was also significantly higher with the 80-kVp protocol. The overall diagnostic image quality of the 3-dimensional volume-rendered images was good with both protocols. The subjective score for venous enhancement was higher with the 80-kVp protocol. The mean volume computed tomography dose index of the 80-kVp protocol (5.6 mGy) was 23.3% lower than that of the 120-kVp protocol (7.3 mGy). The use of the 80-kVp protocol improved overall venous attenuation, especially in perforating veins, and provided similarly high diagnostic image quality with a lower radiation dose when compared to the conventional 120-kVp protocol.

  3. Comparing the Microsoft Kinect to a traditional mouse for adjusting the viewed tissue densities of three-dimensional anatomical structures

    NASA Astrophysics Data System (ADS)

    Juhnke, Bethany; Berron, Monica; Philip, Adriana; Williams, Jordan; Holub, Joseph; Winer, Eliot

    2013-03-01

    Advancements in medical image visualization in recent years have enabled three-dimensional (3D) medical images to be volume-rendered from magnetic resonance imaging (MRI) and computed tomography (CT) scans. Medical data is crucial for patient diagnosis and medical education, and analyzing these three-dimensional models rather than two-dimensional (2D) slices would enable more efficient analysis by surgeons and physicians, especially non-radiologists. An interaction device that is intuitive, robust, and easily learned is necessary to integrate 3D modeling software into the medical community. The keyboard and mouse configuration does not readily manipulate 3D models because these traditional interface devices function within two degrees of freedom, not the six degrees of freedom presented in three dimensions. Using a familiar, commercial-off-the-shelf (COTS) device for interaction would minimize training time and enable maximum usability with 3D medical images. Multiple techniques are available to manipulate 3D medical images and provide doctors more innovative ways of visualizing patient data. One such example is windowing. Windowing is used to adjust the viewed tissue density of digital medical data. A software platform available at the Virtual Reality Applications Center (VRAC), named Isis, was used to visualize and interact with the 3D representations of medical data. In this paper, we present the methodology and results of a user study that examined the usability of windowing 3D medical imaging using a Kinect™ device compared to a traditional mouse.
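The windowing operation both interfaces control can be sketched as a linear level/width mapping of Hounsfield values to display intensities; the preset below is a common soft-tissue window, used here for illustration only.

```python
import numpy as np

def apply_window(hu, level, width):
    """Map CT Hounsfield values to display intensities in [0, 1] using a
    window level/width: values below level - width/2 clip to 0, values
    above level + width/2 clip to 1, with a linear ramp in between."""
    lo = level - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

# soft-tissue preset: level 40 HU, width 400 HU
img = np.array([-1000.0, 40.0, 240.0, 1000.0])   # air, soft tissue, bone-ish
disp = apply_window(img, level=40, width=400)
```

Adjusting `level` and `width` interactively, whether by mouse drag or Kinect gesture, is exactly the operation the user study compares.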

  4. High-frequency annular array with coaxial illumination for dual-modality ultrasonic and photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Filoux, Erwan; Sampathkumar, Ashwin; Chitnis, Parag V.; Aristizábal, Orlando; Ketterling, Jeffrey A.

    2013-05-01

    This paper presents a combined ultrasound and photoacoustic (PA) imaging (PAI) system used to obtain high-quality, co-registered images of mouse-embryo anatomy and vasculature. High-frequency ultrasound (HFU, >20 MHz) is utilized to obtain high-resolution anatomical images of small animals while PAI provides high-contrast images of the vascular network. The imaging system is based on a 40-MHz, 5-element, 6-mm-aperture annular-array transducer with an 800-μm-diameter hole through its central element. The transducer was integrated into a cage-plate assembly allowing a collimated laser beam to pass through the hole so that the optical and acoustic beams were collinear. The assembly was mounted on a two-axis, motorized stage to enable the simultaneous acquisition of co-registered HFU and PA volumetric data. Data were collected from all five elements in receive, and a synthetic-focusing algorithm was applied in post-processing to beamform the data and increase the spatial resolution and depth-of-field (DOF) of the HFU and PA images. Phantom measurements showed that the system could achieve high-resolution images (down to 90 μm for HFU and 150 μm for PAI) and a large DOF of >8 mm. Volume renderings of a mouse embryo showed that the scanner allowed for visualizing morphologically precise anatomy of the entire embryo along with corresponding co-registered vasculature. Major head vessels, such as the superior sagittal sinus or rostral vein, were clearly identified, as was the limb bud vasculature.
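    Synthetic focusing with an annular array is, at its core, delay-and-sum beamforming over the receive elements. A toy sketch of the idea, assuming integer-sample delays (the real system works on RF data with fractional delays and apodization):

```python
def delay_and_sum(element_signals, delays):
    """Sum per-element receive signals after shifting each by its
    focusing delay (in samples). Samples outside the record are zero."""
    n = len(element_signals[0])
    out = [0.0] * n
    for sig, d in zip(element_signals, delays):
        for i in range(n):
            j = i + d                      # apply the focusing delay
            if 0 <= j < n:
                out[i] += sig[j]
    return out

# Toy example: the same echo arrives 0..4 samples later on each of
# 5 annular elements (farther rings have longer acoustic paths).
n = 16
signals = []
for d in range(5):
    s = [0.0] * n
    s[8 + d] = 1.0                         # echo delayed by d samples
    signals.append(s)

focused = delay_and_sum(signals, [0, 1, 2, 3, 4])
print(focused[8])                          # coherent sum across 5 elements
```

    With the correct delays the echoes add coherently at the focal sample; with no delays each sample receives at most one element's contribution, which is why synthetic focusing extends the depth of field.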

  5. A simple rapid process for semi-automated brain extraction from magnetic resonance images of the whole mouse head.

    PubMed

    Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L

    2016-01-15

    Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance, and the paucity of automation tools for the critical early step in processing, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole head images in a single step. The method is simple to install and requires minimal user interaction. This method is equally applicable to different types of MR images. Results were evaluated with Dice and Jaccard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation in brain volumes is preserved. A downloadable software package not otherwise available for extraction of brains from whole head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole head images, rendering them usable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.
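    The Dice and Jaccard indices used for evaluation are simple set overlaps between the automated and reference brain masks. A minimal sketch, with masks represented as sets of voxel indices:

```python
def dice(a, b):
    """Dice similarity of two voxel sets: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard similarity: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

# Toy masks as sets of voxel indices (real masks would be 3D arrays).
auto   = {1, 2, 3, 4, 5, 6}
manual = {2, 3, 4, 5, 6, 7, 8}
print(round(dice(auto, manual), 3), round(jaccard(auto, manual), 3))
```

    Both indices equal 1 for a perfect overlap and 0 for disjoint masks; Dice weights the intersection more heavily than Jaccard does.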

  6. Implementation of an oblique-sectioning visualization tool for line-of-sight stereotactic neurosurgical navigation using the AVW toolkit

    NASA Astrophysics Data System (ADS)

    Bates, Lisa M.; Hanson, Dennis P.; Kall, Bruce A.; Meyer, Frederic B.; Robb, Richard A.

    1998-06-01

    An important clinical application of biomedical imaging and visualization techniques is the provision of image-guided neurosurgical planning and navigation using interactive computer display systems in the operating room. Current systems provide interactive display of orthogonal images and 3D surface or volume renderings integrated with and guided by the location of a surgical probe. However, structures in the 'line-of-sight' path which lead to the surgical target cannot be directly visualized, presenting difficulty in obtaining a full understanding of the 3D volumetric anatomic relationships necessary for effective neurosurgical navigation below the cortical surface. Complex vascular relationships and histologic boundaries like those found in arteriovenous malformations (AVMs) also contribute to the difficulty in determining optimal approaches prior to actual surgical intervention. These difficulties demonstrate the need for interactive oblique imaging methods to provide 'line-of-sight' visualization. Capabilities for 'line-of-sight' interactive oblique sectioning are present in several current neurosurgical navigation systems. However, our implementation is novel, in that it utilizes a completely independent software toolkit, AVW (A Visualization Workshop) developed at the Mayo Biomedical Imaging Resource, integrated with a current neurosurgical navigation system, the COMPASS stereotactic system at Mayo Foundation. AVW is a comprehensive, C-callable imaging toolkit containing over 500 optimized imaging functions and structures. The powerful functionality and versatility of the AVW imaging toolkit allowed facile integration and implementation of the desired interactive oblique sectioning using a small set of functions. The implementation of the AVW-based code resulted in higher-level functions for complete 'line-of-sight' visualization.

  7. Creating 3D visualizations of MRI data: A brief guide.

    PubMed

    Madan, Christopher R

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though 3D 'glass brain' renderings can sometimes be difficult to interpret, they are useful in showing a more global representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study's findings.

  8. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though 3D ‘glass brain’ renderings can sometimes be difficult to interpret, they are useful in showing a more global representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340

  9. The PICWidget

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2007-01-01

    The Plug-in Image Component Widget (PICWidget) is a software component for building digital imaging applications. The component is part of a methodology described in GIS Methodology for Planning Planetary-Rover Operations (NPO-41812), which appears elsewhere in this issue of NASA Tech Briefs. Planetary rover missions return a large number and wide variety of image data products that vary in complexity in many ways. Supported by a powerful, flexible image-data-processing pipeline, the PICWidget can process and render many types of imagery, including (but not limited to) thumbnail, subframed, downsampled, stereoscopic, and mosaic images; images coregistered with orbital data; and synthetic red/green/blue images. The PICWidget is capable of efficiently rendering images from data representing many more pixels than are available at a computer workstation where the images are to be displayed. The PICWidget is implemented as an Eclipse plug-in using the Standard Widget Toolkit, which provides a straightforward interface for re-use of the PICWidget in any number of application programs built upon the Eclipse application framework. Because the PICWidget is tile-based and performs aggressive tile caching, it has the flexibility to perform faster or slower, depending on whether more or less memory is available.
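    Tile-based rendering with aggressive caching typically means an LRU (least-recently-used) tile store: tiles are rendered on demand and evicted when the memory budget is exceeded. A generic sketch of the technique, assuming nothing about PICWidget's actual internals:

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache for rendered image tiles (a sketch of the
    general technique, not the PICWidget implementation)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key, render):
        if key in self._tiles:
            self._tiles.move_to_end(key)      # mark as recently used
            return self._tiles[key]
        tile = render(key)                    # cache miss: render the tile
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)   # evict least recently used
        return tile

cache = TileCache(capacity=2)
renders = []
draw = lambda key: renders.append(key) or key  # record each render call
for key in [(0, 0), (0, 1), (0, 0), (1, 0), (0, 1)]:
    cache.get(key, draw)
print(len(renders))  # 4 renders for 5 accesses: one access was a cache hit
```

    Growing or shrinking `capacity` is what lets such a widget trade speed for memory, matching the "faster or slower, depending on whether more or less memory is available" behavior described above.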

  10. MRI-compatible pipeline for three-dimensional MALDI imaging mass spectrometry using PAXgene fixation.

    PubMed

    Oetjen, Janina; Aichler, Michaela; Trede, Dennis; Strehlow, Jan; Berger, Judith; Heldmann, Stefan; Becker, Michael; Gottschalk, Michael; Kobarg, Jan Hendrik; Wirtz, Stefan; Schiffler, Stefan; Thiele, Herbert; Walch, Axel; Maass, Peter; Alexandrov, Theodore

    2013-09-02

    MALDI imaging mass spectrometry (MALDI-imaging) has emerged as a spatially resolved, label-free bioanalytical technique for direct analysis of biological samples and was recently introduced for analysis of 3D tissue specimens. We present a new experimental and computational pipeline for molecular analysis of tissue specimens which integrates 3D MALDI-imaging, magnetic resonance imaging (MRI), and histological staining and microscopy, and evaluate the pipeline by applying it to analysis of a mouse kidney. To ensure sample integrity and reproducible sectioning, we utilized PAXgene fixation and paraffin embedding and proved its compatibility with MRI. Altogether, 122 serial sections of the kidney were analyzed using MALDI-imaging, resulting in a 3D dataset of 200 GB comprising 2 million spectra. We show that elastic image registration better compensates for local distortions of tissue sections. The computational analysis of 3D MALDI-imaging data was performed using our spatial segmentation pipeline, which determines regions of distinct molecular composition and finds m/z-values co-localized with these regions. For facilitated interpretation of the 3D distribution of ions, we evaluated isosurfaces providing simplified visualization. We present the data in a multimodal fashion combining 3D MALDI-imaging with the MRI volume rendering and with light microscopic images of histologically stained sections. Our novel experimental and computational pipeline for 3D MALDI-imaging can be applied to address clinical questions such as proteomic analysis of tumor morphologic heterogeneity. Examining the protein distribution as well as the drug distribution throughout an entire tumor using our pipeline will facilitate understanding of the molecular mechanisms of carcinogenesis. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian

    2011-11-01

    The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development, and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed. In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the systems are investigated in the context of their somewhat different application areas. The application of an infrared scene simulation system to the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.

  12. Wireless live streaming video of laparoscopic surgery: a bandwidth analysis for handheld computers.

    PubMed

    Gandsas, Alex; McIntire, Katherine; George, Ivan M; Witzke, Wayne; Hoskins, James D; Park, Adrian

    2002-01-01

    Over the last six years, streaming media has emerged as a powerful tool for delivering multimedia content over networks. Concurrently, wireless technology has evolved, freeing users from desktop boundaries and wired infrastructures. At the University of Kentucky Medical Center, we have integrated these technologies to develop a system that can wirelessly transmit live surgery from the operating room to a handheld computer. This study establishes the feasibility of using our system to view surgeries and describes the effect of bandwidth on image quality. A live laparoscopic ventral hernia repair was transmitted to a single handheld computer using five encoding speeds at a constant frame rate, and the quality of the resulting streaming images was evaluated. No video images were rendered when video data were encoded at 28.8 kilobits per second (Kbps), the slowest encoding bitrate studied. The highest quality images were rendered at encoding speeds greater than or equal to 150 Kbps. Of note, a 15-second transmission delay was experienced using all four encoding schemes that rendered video images. We believe that the wireless transmission of streaming video to handheld computers has tremendous potential to enhance surgical education. For medical students and residents, the ability to view live surgeries, lectures, courses and seminars on handheld computers means a larger number of learning opportunities. In addition, we envision that wireless enabled devices may be used to telemonitor surgical procedures. However, bandwidth availability and streaming delay are major issues that must be addressed before wireless telementoring becomes a reality.
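    The bitrate figures translate directly into data-volume budgets. An illustrative calculation (Kbps here means kilobits per second, as in the study; the scenario is hypothetical):

```python
def stream_size_mb(bitrate_kbps, seconds):
    """Data volume of a stream at a given bitrate (kilobits per second),
    converted from bits to megabytes."""
    return bitrate_kbps * 1000 * seconds / 8 / 1e6

# One hour of surgery at the lowest usable encoding rate (150 Kbps):
print(round(stream_size_mb(150, 3600), 1))   # megabytes transferred
```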

  13. The impact of substance use on brain structure in people at high risk of developing schizophrenia.

    PubMed

    Welch, Killian A; McIntosh, Andrew M; Job, Dominic E; Whalley, Heather C; Moorhead, Thomas W; Hall, Jeremy; Owens, David G C; Lawrie, Stephen M; Johnstone, Eve C

    2011-09-01

    Ventricular enlargement and reduced prefrontal volume are consistent findings in schizophrenia. Both are present in first episode subjects and may be detectable before the onset of clinical disorder. Substance misuse is more common in people with schizophrenia and is associated with similar brain abnormalities. We employ a prospective cohort study with nested case-control comparison design to investigate the association between substance misuse, brain abnormality, and subsequent schizophrenia. Substance misuse history, imaging data, and clinical information were collected on 147 subjects at high risk of schizophrenia and 36 controls. Regions exhibiting a significant relationship between level of use of alcohol, cannabis or tobacco, and structure volume were identified. Multivariate regression then elucidated the relationship between level of substance use and structure volumes while accounting for correlations between these variables and correcting for potential confounders. Finally, we established whether substance misuse was associated with later risk of schizophrenia. Increased ventricular volume was associated with alcohol and cannabis use in a dose-dependent manner. Alcohol consumption was associated with reduced frontal lobe volume. Multiple regression analyses found both alcohol and cannabis were significant predictors of these abnormalities when simultaneously entered into the statistical model. Alcohol and cannabis misuse were associated with an increased subsequent risk of schizophrenia. We provide prospective evidence that use of cannabis or alcohol by people at high genetic risk of schizophrenia is associated with brain abnormalities and later risk of psychosis. A family history of schizophrenia may render the brain particularly sensitive to the risk-modifying effects of these substances.

  14. Maxillary distraction osteogenesis in the adolescent cleft patient: three-dimensional computed tomography analysis of linear and volumetric changes over five years.

    PubMed

    Chen, Philip Kuo-Ting; Por, Yong-Chen; Liou, Eric Jein-Wein; Chang, Frank Chun-Shin

    2011-07-01

    To assess the results of maxillary distraction osteogenesis with the Rigid External Distraction System using three-dimensional computed tomography scan volume-rendered images with respect to stability and facial growth at three time frames: preoperative (T0), 1-year postoperative (T1), and 5-years postoperative (T2). Retrospective analysis. Tertiary. A total of 12 patients with severe cleft maxillary hypoplasia were treated between June 30, 1997, and July 15, 1998. The mean age at surgery was 11 years 1 month. Le Fort I maxillary distraction osteogenesis. Distraction was started 2 to 5 days postsurgery at a rate of 1 mm per day. The consolidation period was 3 months. No face mask was used. A paired t test was used for statistical analysis. Overjet, ANB, and SNA and maxillary, pterygoid, and mandibular volumes. From T0 to T1, there were statistically significant increments of overjet, ANB, and SNA and maxillary, pterygoid, and mandibular volumes. The T1 to T2 period demonstrated a reduction of overjet (30.07%) and ANB (54.42%). The maxilla showed a stable SNA and a small but statistically significant advancement of the ANS point. There was a significant increase in the mandibular volume. However, there was no significant change in the maxillary and pterygoid volumes. Maxillary distraction osteogenesis demonstrated linear and volumetric maxillary growth during the distraction phase without clinically significant continued growth thereafter. Overcorrection is required to take into account recurrence of midface retrusion over the long term.

  15. Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howison, Mark; Bethel, E. Wes; Childs, Hank

    2012-01-01

    With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
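    The communication-heavy step in distributed ray-casting is image compositing, which works because the 'over' operator on premultiplied-alpha samples is associative: sorted partial results from different nodes can be combined in any tree order. A minimal numerical sketch of that property (the node counts and values are illustrative):

```python
def over(front, back):
    """Composite two premultiplied-alpha (color, alpha) samples,
    front over back -- the core operation when depth-sorted partial
    images from different nodes are merged."""
    c_f, a_f = front
    c_b, a_b = back
    return (c_f + (1.0 - a_f) * c_b, a_f + (1.0 - a_f) * a_b)

# Three ray segments, e.g. one per node, in front-to-back order.
a, b, c = (0.4, 0.5), (0.2, 0.25), (0.1, 0.125)

left  = over(over(a, b), c)    # strictly sequential compositing
right = over(a, over(b, c))    # tree-style (parallel-friendly) compositing
print(left, right)             # identical results either way
```

    Associativity is what lets a compositing reduction be restructured to use fewer, larger messages, which is where the hybrid approach wins at scale.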

  16. Quantification of human body fat tissue percentage by MRI.

    PubMed

    Müller, Hans-Peter; Raudies, Florian; Unrath, Alexander; Neumann, Heiko; Ludolph, Albert C; Kassubek, Jan

    2011-01-01

    The MRI-based evaluation of the quantity and regional distribution of adipose tissue is one objective measure in the investigation of obesity. The aim of this article was to report a comprehensive and automatic analytical method for the determination of the volumes of subcutaneous fat tissue (SFT) and visceral fat tissue (VFT) in either the whole human body or selected slices or regions of interest. Using an MRI protocol in an examination position that was convenient for volunteers and patients with severe diseases, 22 healthy subjects were examined. The software platform was able to merge MRI scans of several body regions acquired in separate acquisitions. Through a cascade of image processing steps, SFT and VFT volumes were calculated. Whole-body SFT and VFT distributions, as well as fat distributions of defined body slices, were analysed in detail. Complete three-dimensional datasets were analysed in a reproducible manner with as few operator-dependent interventions as possible. In order to determine the SFT volume, the ARTIS (Adapted Rendering for Tissue Intensity Segmentation) algorithm was introduced. The advantage of the ARTIS algorithm was the delineation of SFT volumes in regions in which standard region-growing techniques fail. Using the ARTIS algorithm, automatic SFT volume detection was feasible. MRI data analysis was able to determine SFT and VFT volume percentages using new analytical strategies. With the techniques described, it was possible to detect changes in SFT and VFT percentages of the whole body and selected regions. The techniques presented in this study are likely to be of use in obesity-related investigations, as well as in the examination of longitudinal changes in weight during various medical conditions. Copyright © 2010 John Wiley & Sons, Ltd.

  17. Volumetric Visualization of Human Skin

    NASA Astrophysics Data System (ADS)

    Kawai, Toshiyuki; Kurioka, Yoshihiro

    We propose a modeling and rendering technique for human skin, which can provide realistic color, gloss and translucency for various applications in computer graphics. Our method is based on a volumetric representation of the structure inside the skin. Our model consists of the stratum corneum and three layers of pigments. The stratum corneum also has a layered structure in which the incident light is reflected, refracted and diffused. Each layer of pigment contains carotene, melanin or hemoglobin. The density distributions of pigments, which define the color of each layer, can be supplied as one of the voxel values. Surface normals of upper-side voxels are perturbed to produce bumps and lines on the skin. We apply a ray tracing approach to this model to obtain the rendered image. Multiple scattering in the stratum corneum and the reflective and absorptive spectra of pigments are considered. We also consider a Fresnel term to calculate the specular component for the glossy surface of skin. Some examples of rendered images are shown, which successfully visualize human skin.
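    The Fresnel term for a glossy specular component is commonly evaluated with Schlick's approximation. A sketch, assuming an air/stratum-corneum refractive index of about 1.55 (an assumed value; the paper's exact constants are not given in the abstract):

```python
def fresnel_schlick(cos_theta, n1=1.0, n2=1.55):
    """Schlick's approximation to Fresnel reflectance:
    F = F0 + (1 - F0) * (1 - cos(theta))^5, with F0 from the indices."""
    f0 = ((n1 - n2) / (n1 + n2)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(round(fresnel_schlick(1.0), 3))   # normal incidence: just F0 (~5%)
print(round(fresnel_schlick(0.0), 3))   # grazing incidence: total reflection
```

    The strong rise toward grazing angles is what gives skin (and most dielectrics) its characteristic edge highlights.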

  18. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial; one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
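    For context, the classic global Reinhard operator illustrates both what a TMO does and why it needs content-specific parameters: the `key` value below is exactly the kind of parameter the authors propose to select objectively. A sketch of that well-known operator (not the paper's method):

```python
import math

def reinhard_tonemap(luminances, key=0.18):
    """Global Reinhard operator: scale each luminance by
    key / log-average luminance, then compress with L / (1 + L)."""
    eps = 1e-6                                # avoid log(0)
    log_avg = math.exp(sum(math.log(eps + l) for l in luminances)
                       / len(luminances))
    out = []
    for l in luminances:
        scaled = key * l / log_avg
        out.append(scaled / (1.0 + scaled))   # maps [0, inf) into [0, 1)
    return out

hdr = [0.01, 0.2, 1.0, 50.0, 4000.0]          # HDR luminance samples
ldr = reinhard_tonemap(hdr)
print(all(0.0 <= v < 1.0 for v in ldr))
```

    Choosing `key` too low or too high crushes shadows or highlights for a given scene, which is why content-dependent parameter selection matters.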

  19. Virtual environments from panoramic images

    NASA Astrophysics Data System (ADS)

    Chapman, David P.; Deacon, Andrew

    1998-12-01

    A number of recent projects have demonstrated the utility of Internet-enabled image databases for the documentation of complex, inaccessible and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, and thus 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper will demonstrate how such technologies can be customized, extended and linked to facility management systems delivered over a corporate intranet to enable end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for the integration of such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) will be described, as will techniques for precise 'As-Built' modeling using the calibrated images from which panoramas have been derived and the use of textures from these images to increase the realism of rendered scenes. A number of case studies relating to both nuclear and process engineering will demonstrate the extent to which such solutions are scalable in order to deal with the very large volumes of image data required to fully document the large, complex facilities typical of these industry sectors.

  20. Cardiac Cycle Dependent Left Atrial Dynamics: Implications for Catheter Ablation of Atrial Fibrillation

    PubMed Central

    Patel, Amit R.; Fatemi, Omid; Norton, Patrick T.; West, J. Jason; Helms, Adam S.; Kramer, Christopher M.; Ferguson, John D.

    2008-01-01

    Background Left atrial volume (LAV) determines prognosis and response to therapy in atrial fibrillation. Integration of electro-anatomical maps with 3D-images rendered from CT and MRI is used to facilitate atrial fibrillation ablation. Objectives We measured LAV changes and regional motion during the cardiac cycle that might affect the accuracy of image integration and determined their relationship to standard LAV measurements. Methods MRI was performed in thirty patients with paroxysmal atrial fibrillation. Left atrial time-volume curves were generated and used to divide the left atrial function (LAEF) into pumping (PEF) and conduit (CEF) fractions and to determine the maximum LAV (LAMAX) and the pre-atrial contraction volume (PACV). LAV was measured using an MRI angiogram and traditional geometric models from echocardiography (area-length and ellipsoid). The in-plane displacement of the pulmonary veins, anterior left atrium, mitral annulus, and left atrial appendage was measured. Results LAMAX was 107±36ml and occurred at 42±5% of the RR interval. PACV was 86 ±34ml and occurred at 81±4% of the RR interval. LAEF was 45±10% and PEF was 31±10%. LAV measurements made from the MRI angiogram, area-length and ellipsoid models underestimated LAMAX by 21±25ml, 16±26ml, and 35±22ml, respectively. The anterior LA, mitral annulus, and left atrial appendage were significantly displaced during the cardiac cycle (8.8±2.0mm, 13.2±3.8mm, and 10.2±3.4mm, respectively); the pulmonary veins were not. Conclusions LAV changes significantly during the cardiac cycle and substantial regional variation in left atrial motion exists. Standard measurements of left atrial volume significantly underestimate LAMAX when compared to the gold standard measure of 3D-volumetrics. PMID:18486563
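    The geometric models the study compares against are standard echocardiographic formulas: the biplane area-length model, V = 0.85·A1·A2/L, and the ellipsoid model, V = (π/6)·D1·D2·D3. A sketch with illustrative (non-patient) dimensions:

```python
import math

def area_length_volume(a1, a2, length):
    """Biplane area-length model: V = 0.85 * A1 * A2 / L
    (areas in cm^2, length in cm, volume in ml)."""
    return 0.85 * a1 * a2 / length

def ellipsoid_volume(d1, d2, d3):
    """Ellipsoid model: V = (pi / 6) * D1 * D2 * D3 (diameters in cm)."""
    return math.pi / 6.0 * d1 * d2 * d3

# Illustrative dimensions only, not values from the study:
print(round(area_length_volume(20.0, 22.0, 5.0), 1))   # ml
print(round(ellipsoid_volume(4.0, 5.5, 6.0), 1))       # ml
```

    Because both models idealize the atrium as a regular solid, they systematically miss irregular geometry, consistent with the underestimation relative to 3D volumetrics reported above.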

  1. US Army Armor Reference Data in Three Volumes. Volume I. The Army Division.

    DTIC Science & Technology

    1981-01-01


  2. FluoroSim: A Visual Problem-Solving Environment for Fluorescence Microscopy

    PubMed Central

    Quammen, Cory W.; Richardson, Alvin C.; Haase, Julian; Harrison, Benjamin D.; Taylor, Russell M.; Bloom, Kerry S.

    2010-01-01

    Fluorescence microscopy provides a powerful method for localization of structures in biological specimens. However, aspects of the image formation process such as noise and blur from the microscope's point-spread function combine to produce an unintuitive image transformation on the true structure of the fluorescing molecules in the specimen, hindering qualitative and quantitative analysis of even simple structures in unprocessed images. We introduce FluoroSim, an interactive fluorescence microscope simulator that can be used to train scientists who use fluorescence microscopy to understand the artifacts that arise from the image formation process, to determine the appropriateness of fluorescence microscopy as an imaging modality in an experiment, and to test and refine hypotheses of model specimens by comparing the output of the simulator to experimental data. FluoroSim renders synthetic fluorescence images from arbitrary geometric models represented as triangle meshes. We describe three rendering algorithms on graphics processing units for computing the convolution of the specimen model with a microscope's point-spread function and report on their performance. We also discuss several cases where the microscope simulator has been used to solve real problems in biology. PMID:20431698
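    The core image-formation step, blurring a specimen model with the microscope's point-spread function and adding camera noise, can be sketched in a few lines (a naive CPU version with a made-up 3x3 PSF; FluoroSim itself renders triangle meshes on the GPU):

```python
import random

def convolve2d(img, psf):
    """Blur a specimen image with a point-spread function (zero padding;
    identical to true convolution here because the PSF is symmetric)."""
    h, w = len(img), len(img[0])
    kh, kw = len(psf), len(psf[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for j in range(kh):
                for i in range(kw):
                    yy, xx = y + j - oy, x + i - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx] * psf[j][i]
            out[y][x] = s
    return out

# A single point emitter; the normalized PSF spreads its light out.
specimen = [[0.0] * 5 for _ in range(5)]
specimen[2][2] = 100.0
psf = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
blurred = convolve2d(specimen, psf)
noisy = [[v + random.gauss(0, 1) for v in row] for row in blurred]  # camera noise
print(round(blurred[2][2], 1), round(sum(map(sum, blurred)), 1))
```

    The point source drops from 100 to 25 at its own pixel while total intensity is conserved, which is exactly the unintuitive transformation the simulator is meant to teach.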

  3. Four dimensional hybrid ultrasound and optoacoustic imaging via passive element optical excitation in a hand-held probe

    NASA Astrophysics Data System (ADS)

    Fehm, Thomas Felix; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-10-01

    Ultrasonography and optoacoustic imaging share powerful advantages related to the natural aptitude for real-time image rendering with high resolution, the hand-held operation, and lack of ionizing radiation. The two methods also possess very different yet highly complementary advantages of the mechanical and optical contrast in living tissues. Nonetheless, efficient integration of these modalities remains challenging owing to the fundamental differences in the underlying physical contrast, optimal signal acquisition, and image reconstruction approaches. We report on a method for hybrid acquisition and reconstruction of three-dimensional pulse-echo ultrasound and optoacoustic images in real time based on passive ultrasound generation with an optical absorber, thus avoiding the hardware complexity of active ultrasound generation. In this way, complete hybrid datasets are generated with a single laser interrogation pulse, resulting in simultaneous rendering of ultrasound and optoacoustic images at an unprecedented rate of 10 volumetric frames per second. Performance is subsequently showcased in phantom experiments and in-vivo measurements from a healthy human volunteer, confirming general clinical applicability of the method.

  4. Four dimensional hybrid ultrasound and optoacoustic imaging via passive element optical excitation in a hand-held probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehm, Thomas Felix; Razansky, Daniel, E-mail: dr@tum.de; Faculty of Medicine, Technische Universität München, Munich

    2014-10-27

    Ultrasonography and optoacoustic imaging share powerful advantages related to the natural aptitude for real-time image rendering with high resolution, the hand-held operation, and lack of ionizing radiation. The two methods also possess very different yet highly complementary advantages of the mechanical and optical contrast in living tissues. Nonetheless, efficient integration of these modalities remains challenging owing to the fundamental differences in the underlying physical contrast, optimal signal acquisition, and image reconstruction approaches. We report on a method for hybrid acquisition and reconstruction of three-dimensional pulse-echo ultrasound and optoacoustic images in real time based on passive ultrasound generation with an optical absorber, thus avoiding the hardware complexity of active ultrasound generation. In this way, complete hybrid datasets are generated with a single laser interrogation pulse, resulting in simultaneous rendering of ultrasound and optoacoustic images at an unprecedented rate of 10 volumetric frames per second. Performance is subsequently showcased in phantom experiments and in-vivo measurements from a healthy human volunteer, confirming general clinical applicability of the method.

  5. BioVEC: a program for biomolecule visualization with ellipsoidal coarse-graining.

    PubMed

    Abrahamsson, Erik; Plotkin, Steven S

    2009-09-01

    Biomolecule Visualization with Ellipsoidal Coarse-graining (BioVEC) is a tool for visualizing molecular dynamics simulation data while allowing coarse-grained residues to be rendered as ellipsoids. BioVEC reads in configuration files, which may be output from molecular dynamics simulations that include orientation output in either quaternion or ANISOU format, and can render frames of the trajectory in several common image formats for subsequent concatenation into a movie file. The BioVEC program is written in C++, uses the OpenGL API for rendering, and is open source. It is lightweight, allows user-defined display settings such as texture, and runs on either Windows or Linux platforms.

  6. Acoustic Holographic Rendering with Two-dimensional Metamaterial-based Passive Phased Array

    PubMed Central

    Xie, Yangbo; Shen, Chen; Wang, Wenqi; Li, Junfei; Suo, Dingjie; Popa, Bogdan-Ioan; Jing, Yun; Cummer, Steven A.

    2016-01-01

    Acoustic holographic rendering, in complete analogy with optical holography, is useful for applications ranging from multi-focal lensing and multiplexed sensing to the synthesis of three-dimensional complex sound fields. Conventional approaches rely on a large number of active transducers and phase-shifting circuits. In this paper we show that by using passive metamaterials as subwavelength pixels, holographic rendering can be achieved without cumbersome circuitry and with only a single transducer, thus significantly reducing system complexity. Such metamaterial-based holograms can serve as versatile platforms for various advanced acoustic wave manipulation and signal modulation, leading to new possibilities in acoustic sensing, energy deposition and medical diagnostic imaging. PMID:27739472

  7. CSSG: Interactive Realism in Graphics with Complex Materials

    DTIC Science & Technology

    2010-09-28

    period (April 22, 2009 to June 30, 2010): Greg Nichols, Jeremy Shopf, and Chris Wyman, "Hierarchical Image-Space Radiosity for Interactive Global Illumination," paper presentation at the Eurographics Symposium on Rendering, Girona, Spain, June

  8. Computer-based analysis of microvascular alterations in a mouse model for Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Heinzer, Stefan; Müller, Ralph; Stampanoni, Marco; Abela, Rafael; Meyer, Eric P.; Ulmann-Schuler, Alexandra; Krucker, Thomas

    2007-03-01

    Vascular factors associated with Alzheimer's disease (AD) have recently gained increased attention. To investigate changes in vascular, particularly microvascular architecture, we developed a hierarchical imaging framework to obtain large-volume, high-resolution 3D images from brains of transgenic mice modeling AD. In this paper, we present imaging and data analysis methods which allow compiling unique characteristics from several hundred gigabytes of image data. Image acquisition is based on desktop micro-computed tomography (µCT) and local synchrotron-radiation µCT (SRµCT) scanning with a nominal voxel size of 16 µm and 1.4 µm, respectively. Two visualization approaches were implemented: stacks of Z-buffer projections for fast data browsing, and progressive-mesh based surface rendering for detailed 3D visualization of the large datasets. In a first step, image data was assessed visually via a Java client connected to a central database. Identified characteristics of interest were subsequently quantified using global morphometry software. To obtain even deeper insight into microvascular alterations, tree analysis software was developed providing local morphometric parameters such as number of vessel segments or vessel tortuosity. In the context of ever increasing image resolution and large datasets, computer-aided analysis has proven both powerful and indispensable. The hierarchical approach maintains the context of local phenomena, while proper visualization and morphometry provide the basis for detailed analysis of the pathology related to structure. Beyond analysis of microvascular changes in AD this framework will have significant impact considering that vascular changes are involved in other neurodegenerative diseases as well as in cancer, cardiovascular disease, asthma, and arthritis.
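A local morphometric parameter such as the vessel tortuosity mentioned above is commonly defined as the ratio of centerline arc length to endpoint chord length. The sketch below is a generic illustration under that standard definition, not the authors' tree-analysis software:

```python
import numpy as np

def tortuosity(points):
    # Arc length of the vessel centerline divided by the straight-line
    # (chord) distance between its endpoints; 1.0 means perfectly straight.
    points = np.asarray(points, dtype=float)
    arc = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord
```

For example, a centerline bent at a right angle has arc length 2 and chord length sqrt(2), giving a tortuosity of about 1.41.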

  9. Correlative cryo-fluorescence light microscopy and cryo-electron tomography of Streptomyces.

    PubMed

    Koning, Roman I; Celler, Katherine; Willemse, Joost; Bos, Erik; van Wezel, Gilles P; Koster, Abraham J

    2014-01-01

    Light microscopy and electron microscopy are complementary techniques that in a correlative approach enable identification and targeting of fluorescently labeled structures in situ for three-dimensional imaging at nanometer resolution. Correlative imaging allows electron microscopic images to be positioned in a broader temporal and spatial context. We employed cryo-correlative light and electron microscopy (cryo-CLEM), combining cryo-fluorescence light microscopy and cryo-electron tomography, on vitrified Streptomyces bacteria to study cell division. Streptomycetes are mycelial bacteria that grow as long hyphae and reproduce via sporulation. On solid media, Streptomyces subsequently form distinct aerial mycelia where cell division leads to the formation of unigenomic spores which separate and disperse to form new colonies. In liquid media, only vegetative hyphae are present divided by noncell separating crosswalls. Their multicellular life style makes them exciting model systems for the study of bacterial development and cell division. Complex intracellular structures have been visualized with transmission electron microscopy. Here, we describe the methods for cryo-CLEM that we applied for studying Streptomyces. These methods include cell growth, fluorescent labeling, cryo-fixation by vitrification, cryo-light microscopy using a Linkam cryo-stage, image overlay and relocation, cryo-electron tomography using a Titan Krios, and tomographic reconstruction. Additionally, methods for segmentation, volume rendering, and visualization of the correlative data are described. © 2014 Elsevier Inc. All rights reserved.

  10. A high-resolution 3D ultrasonic system for rapid evaluation of the anterior and posterior segment.

    PubMed

    Peyman, Gholam A; Ingram, Charles P; Montilla, Leonardo G; Witte, Russell S

    2012-01-01

    Traditional ultrasound imaging systems for ophthalmology employ slow, mechanical scanning of a single-element ultrasound transducer. The goal was to demonstrate rapid examination of the anterior and posterior segment with a three-dimensional (3D) commercial ultrasound system incorporating high-resolution linear probe arrays. The 3D images of the porcine eye were generated in approximately 10 seconds by scanning one of two commercial linear arrays (25- and 50-MHz). Healthy enucleated pig eyes were compared with those with induced injury or placement of a foreign material (eg, metal). Rapid, volumetric imaging was also demonstrated in one human eye in vivo. The 50-MHz probe provided exquisite volumetric images of the anterior segment at a depth up to 15 mm and axial resolution of 30 μm. The 25-MHz probe provided a larger field of view (lateral × depth: 20 × 30 mm), sufficient for capturing the entire anterior and posterior segments of the pig eye, at a resolution of 60 μm. A 50-MHz scan through the human eyelid illustrated detailed structures of the Meibomian glands, cilia, cornea, and anterior segment back to the posterior capsule. The 3D system with its high-frequency ultrasound arrays, fast data acquisition, and volume rendering capability shows promise for investigating anterior and posterior structures of the eye. Copyright 2012, SLACK Incorporated.

  11. On-demand rendering of an oblique slice through 3D volumetric data using JPEG2000 client-server framework

    NASA Astrophysics Data System (ADS)

    Joshi, Rajan L.

    2006-03-01

    In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
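The core idea, requesting only the compressed data whose spatial region intersects the oblique slice, can be sketched geometrically. This toy treats the volume as a grid of fixed-size bricks, a deliberate simplification of JPEG2000's actual precinct and resolution structure, and every name and parameter here is hypothetical:

```python
import numpy as np

def precincts_for_slice(origin, u, v, extent, precinct=(64, 64, 64), samples=64):
    # Sample points across the oblique slice (origin + s*u + t*v for
    # s, t in [0, 1]) and collect the set of brick indices they fall in;
    # only compressed data for these bricks would need to be requested.
    s = np.linspace(0.0, 1.0, samples)
    su, sv = np.meshgrid(s, s)
    pts = (np.asarray(origin, float)[None, None, :]
           + su[..., None] * np.asarray(u, float)[None, None, :]
           + sv[..., None] * np.asarray(v, float)[None, None, :])
    inside = np.all((pts >= 0) & (pts < np.asarray(extent)), axis=-1)
    idx = np.floor(pts / np.asarray(precinct, float)).astype(int)
    return {tuple(p) for p in idx[inside]}
```

A client-side cache would then subtract the bricks already received from this set before issuing the next request, which is the bandwidth saving the abstract quantifies.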

  12. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo-based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
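The triple energy window (TEW) correction compared above is conventionally computed by trapezoidal interpolation between two narrow windows flanking the photopeak. The following is a minimal sketch assuming that standard textbook formula, not the authors' exact implementation (the weighting factor they discuss is omitted):

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    # Triple-energy-window estimate of scatter counts inside the photopeak
    # window: trapezoidal interpolation between two narrow flanking windows,
    # each normalized by its own width and scaled to the photopeak width.
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def tew_corrected_counts(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
    # Subtract the scatter estimate from the photopeak counts, clamped at zero.
    return max(c_peak - tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak), 0.0)
```

For example, with 100 and 60 counts in two 4-keV flanking windows and a 20-keV photopeak window, the scatter estimate is (25 + 15) x 10 = 400 counts.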

  13. [Research on Three-dimensional Medical Image Reconstruction and Interaction Based on HTML5 and Visualization Toolkit].

    PubMed

    Gao, Peng; Liu, Peng; Su, Hongsen; Qiao, Liang

    2015-04-01

    Integrating the Visualization Toolkit with the interaction, bidirectional communication, and graphics rendering capabilities provided by HTML5, we explored and experimented on the feasibility of remote medical image reconstruction and interaction purely in the Web. We proposed a server-centric method that does not require downloading large medical datasets to the client and that sidesteps concerns about network transmission capacity and the three-dimensional (3D) rendering capability of client hardware. The method integrates remote medical image reconstruction and interaction into the Web seamlessly and is applicable to lower-end computers and mobile devices. Finally, we tested this method over the Internet and achieved real-time performance. This Web-based 3D reconstruction and interaction method, which works across Internet terminals and performance-limited devices, may be useful for remote medical assistance.

  14. Sensor fusion for synthetic vision

    NASA Technical Reports Server (NTRS)

    Pavel, M.; Larimer, J.; Ahumada, A.

    1991-01-01

    Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain data base to facilitate visually guided flight and ground operations in low visibility conditions. An approach to fusion based on multiresolution image representation and processing is described which facilitates fusion of images differing in resolution within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
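Multiresolution fusion of two registered images can be sketched with a Laplacian-pyramid select-max rule: keep the stronger detail coefficient at each scale, then collapse. This is a generic textbook scheme offered as an illustration, not the approach implemented in the cited work:

```python
import numpy as np

def downsample(img):
    # 2x2 box-filter decimation (a crude stand-in for a Gaussian pyramid step).
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def upsample(img, shape):
    # Nearest-neighbour expansion back to a given shape.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # Band-pass detail levels plus one coarse residual.
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))
        cur = down
    pyr.append(cur)
    return pyr

def fuse(a, b, levels=3):
    # Keep the larger-magnitude detail coefficient at each band-pass level,
    # average the coarse residuals, then collapse the fused pyramid.
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(pa[:-1], pb[:-1])]
    out = (pa[-1] + pb[-1]) / 2.0
    for lap in reversed(fused):
        out = upsample(out, lap.shape) + lap
    return out
```

Because selection happens per level, the scheme can combine detail from sources of differing resolution, which is the property the abstract highlights.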

  15. Pseudo-shading technique in the two-dimensional domain: a post-processing algorithm for enhancing the Z-buffer of a three-dimensional binary image.

    PubMed

    Tan, A C; Richards, R

    1989-01-01

    Three-dimensional (3D) medical graphics is becoming popular in clinical use on tomographic scanners. Research work in 3D reconstructive display of computerized tomography (CT) and magnetic resonance imaging (MRI) scans on conventional computers has produced many so-called pseudo-3D images. The quality of these images depends on the rendering algorithm, the coarseness of the digitized object, the number of grey levels and the image screen resolution. CT and MRI data are fundamentally voxel based and they produce images that are coarse because of the resolution of the data acquisition system. 3D images produced by the Z-buffer depth shading technique suffer loss of detail when complex objects with fine textural detail need to be displayed. Attempts have been made to improve the display of voxel objects, and existing techniques have shown the improvement possible using these post-processing algorithms. The improved rendering technique works on the Z-buffer image to generate a shaded image using a single light source in any direction. The effectiveness of the technique in generating a shaded image has been shown to be a useful means of presenting 3D information for clinical use.
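The post-processing idea, shading a Z-buffer with a single light source in any direction, can be sketched by estimating surface normals from depth gradients and applying Lambertian shading. This is a generic reconstruction of the technique's principle under assumed conventions (depth increasing away from the viewer, unit z-scale), not the authors' algorithm:

```python
import numpy as np

def shade_zbuffer(z, light=(1.0, 1.0, 1.0)):
    # Estimate surface normals from depth-buffer gradients, then apply
    # Lambertian shading with one directional light; larger depth gradients
    # tilt the normal away from the viewer, darkening steep surfaces.
    gy, gx = np.gradient(z.astype(float))
    normals = np.dstack([-gx, -gy, np.ones_like(gx)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)
```

Operating purely in the two-dimensional depth-image domain is what makes such methods cheap enough to re-render under a moving light without touching the voxel data.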

  16. Carpal bone movements in gripping action of the giant panda (Ailuropoda melanoleuca)

    PubMed Central

    ENDO, HIDEKI; SASAKI, MOTOKI; HAYASHI, YOSHIHIRO; KOIE, HIROSHI; YAMAYA, YOSHIKI; KIMURA, JUNPEI

    2001-01-01

    The movement of the carpal bones in gripping was clarified in the giant panda (Ailuropoda melanoleuca) by means of macroscopic anatomy, computed tomography (CT) and related 3-dimensional (3-D) volume rendering techniques. In the gripping action, 3-D CT images demonstrated that the radial and 4th carpal bones largely rotate or flex to the radial and ulnar sides respectively. This indicates that these carpal bones on both sides enable the panda to flex the palm from the forearm and to grasp objects by the manipulation mechanism that includes the radial sesamoid. In the macroscopic observations, we found that the smooth articulation surfaces are enlarged between the radial carpal and the radius on the radial side, and between the 4th and ulnar carpals on the ulnar side. The panda skilfully grasps using a double pincer-like apparatus with the huge radial sesamoid and accessory carpal. PMID:11273049

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Tarun C.; Hayee, Fariah; Baldi, Andrea

    Many energy storage materials undergo large volume changes during charging and discharging. The resulting stresses often lead to defect formation in the bulk, but less so in nanosized systems. Here, we capture in real time the mechanism of one such transformation—the hydrogenation of single-crystalline palladium nanocubes from 15 to 80 nm—to better understand the reason for this durability. First, using environmental scanning transmission electron microscopy, we monitor the hydrogen absorption process in real time with 3 nm resolution. Then, using dark-field imaging, we structurally examine the reaction intermediates with 1 nm resolution. The reaction proceeds through nucleation and growth of the new phase in corners of the nanocubes. As the hydrogenated phase propagates across the particles, portions of the lattice misorient by 1.5%, diminishing crystal quality. Once transformed, all the particles explored return to a pristine state. As a result, the nanoparticles’ ability to remove crystallographic imperfections renders them more durable than their bulk counterparts.

  18. Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trebotich, D

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  19. A Case-Based Study with Radiologists Performing Diagnosis Tasks in Virtual Reality.

    PubMed

    Venson, José Eduardo; Albiero Berni, Jean Carlo; Edmilson da Silva Maia, Carlos; Marques da Silva, Ana Maria; Cordeiro d'Ornellas, Marcos; Maciel, Anderson

    2017-01-01

    In radiology diagnosis, medical images are most often visualized slice by slice. At the same time, the visualization based on 3D volumetric rendering of the data is considered useful and has increased its field of application. In this work, we present a case-based study with 16 medical specialists to assess the diagnostic effectiveness of a Virtual Reality interface in fracture identification over 3D volumetric reconstructions. We developed a VR volume viewer compatible with both the Oculus Rift and handheld-based head mounted displays (HMDs). We then performed user experiments to validate the approach in a diagnosis environment. In addition, we assessed the subjects' perception of the 3D reconstruction quality, ease of interaction and ergonomics, and also the users' opinions on how VR applications can be useful in healthcare. Among other results, we have found a high level of effectiveness of the VR interface in identifying superficial fractures on head CTs.

  20. Modeling complex biological flows in multi-scale systems using the APDEC framework

    NASA Astrophysics Data System (ADS)

    Trebotich, David

    2006-09-01

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

Top