Sample records for complex 3D renderings

  1. Scalable Multi-Platform Distribution of Spatial 3D Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, the software and hardware configurations of target systems differ significantly, which makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which severely limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model with a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from the data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be implemented compactly for various devices and platforms.
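
    To make the tile-based provisioning concrete, the sketch below shows how a thin client could resolve a pre-rendered oblique image tile from a tile pyramid. The URL scheme, view names, and tiling math are illustrative assumptions, not the authors' API.

      # Hypothetical client-side lookup of a pre-rendered oblique tile
      # (standard slippy-map tiling math; the service URL is an assumption).
      import math

      def tile_for(lon_deg, lat_deg, zoom):
          """Map a WGS84 coordinate to a web-map tile index."""
          n = 2 ** zoom
          x = int((lon_deg + 180.0) / 360.0 * n)
          lat = math.radians(lat_deg)
          y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
          return zoom, x, y

      def tile_url(zoom, x, y, view="oblique_north"):
          # one pre-rendered pyramid per oblique view direction (assumed layout)
          return f"https://tiles.example.org/{view}/{zoom}/{x}/{y}.jpg"

      print(tile_url(*tile_for(13.06, 52.39, 15)))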

  2. A Quadtree Organization Construction and Scheduling Method for Urban 3D Model Based on Weight

    NASA Astrophysics Data System (ADS)

    Yao, C.; Peng, G.; Song, Y.; Duan, M.

    2017-09-01

    The increase in urban 3D model precision and data quantity places higher demands on the real-time rendering of digital city models. Improving the organization, management, and scheduling of 3D model data in a 3D digital city can improve rendering effectiveness and efficiency. This paper takes the complexity of urban models into account and proposes a weight-based quadtree construction and scheduled-rendering method for urban 3D models. The urban 3D model is divided into different rendering weights according to certain rules, and quadtree construction and scheduled rendering are performed according to these weights. An algorithm is also proposed for extracting bounding boxes from model drawing primitives to generate LOD models automatically. Using the proposed algorithm, a 3D urban planning and management software system was developed; practice has shown that the algorithm is efficient and feasible, with the rendering frame rate for both large and small scenes stable at around 25 frames per second.
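
    A minimal sketch of the weight-based scheduling idea (not the authors' code): each model carries a rendering weight assigned by classification rules, and per frame the scheduler draws high-weight, nearby models first under a fixed budget.

      # Toy weight-based render scheduling; weights and budget are assumptions.
      from dataclasses import dataclass

      @dataclass
      class Model:
          name: str
          xy: tuple    # centre of the model's bounding box
          weight: int  # rendering weight from classification rules

      def schedule(models, cam_xy, budget):
          """Order candidates by (weight, proximity) and keep a frame budget."""
          def key(m):
              d = ((m.xy[0] - cam_xy[0])**2 + (m.xy[1] - cam_xy[1])**2) ** 0.5
              return (-m.weight, d)
          return sorted(models, key=key)[:budget]

      scene = [Model("landmark", (0, 0), 3), Model("house", (5, 5), 1),
               Model("road", (1, 2), 2)]
      print([m.name for m in schedule(scene, (0, 0), 2)])  # ['landmark', 'road']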

  3. OpenWebGlobe 2: Visualization of Complex 3D Geodata in the (Mobile) Web Browser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive tasks for processing data. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast number of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching, and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and for providing 2D and 3D map data to a large number of (mobile) web clients. The paper describes the approach for processing, rendering, and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.

  4. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming the brightness dimming of the 3D display mode. The 3D viewing surround provides varying conditions for image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization perform well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image-mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround, combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
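
    The combination of global and local adaptation can be sketched as follows; the Reinhard-style local term, box-filter kernel, and blending weight are stand-in assumptions for the paper's adaptation model.

      # Illustrative global + local tone compression on a luminance image.
      import numpy as np

      def box_blur(img, k):
          """Separable box filter used as a cheap local-mean estimate."""
          kernel = np.ones(k) / k
          for axis in (0, 1):
              img = np.apply_along_axis(
                  lambda r: np.convolve(r, kernel, mode="same"), axis, img)
          return img

      def tone_compress(lum, gamma=0.6, local_mix=0.5, k=15):
          global_out = lum ** gamma                          # global adaptation
          local_out = lum / (lum + box_blur(lum, k) + 1e-6)  # space-varying term
          return (1 - local_mix) * global_out + local_mix * local_out

      img = np.random.default_rng(0).random((32, 32))
      print(tone_compress(img).mean())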

  5. See-Through Imaging of Laser-Scanned 3D Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10⁷ or 10⁸ 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
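
    The sorting-free stochastic transparency can be illustrated as below: the points are split into L groups, each group is rasterized opaquely with its own z-buffer, and the L images are averaged (toy sizes and uniform random points; not the authors' implementation).

      # Ensemble-average point splatting: opacity emerges without depth sorting.
      import numpy as np

      rng = np.random.default_rng(0)
      W = H = 64
      L = 16                              # ensemble count; higher L -> smoother
      pts = rng.uniform(0, 1, (5000, 3))  # x, y, depth in [0, 1]
      col = rng.uniform(0, 1, 5000)       # per-point intensity

      accum = np.zeros((H, W))
      for g in range(L):
          sel = slice(g, None, L)         # every L-th point forms one group
          img = np.zeros((H, W))
          zbuf = np.full((H, W), np.inf)
          xs = (pts[sel, 0] * (W - 1)).astype(int)
          ys = (pts[sel, 1] * (H - 1)).astype(int)
          for x, y, z, c in zip(xs, ys, pts[sel, 2], col[sel]):
              if z < zbuf[y, x]:          # opaque z-test within the group
                  zbuf[y, x] = z
                  img[y, x] = c
          accum += img
      see_through = accum / L             # average of L opaque renderings
      print(see_through.mean())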

  6. 3D cinematic rendering of the calvarium, maxillofacial structures, and skull base: preliminary observations.

    PubMed

    Rowe, Steven P; Zinreich, S James; Fishman, Elliot K

    2018-06-01

    Three-dimensional (3D) visualizations of volumetric data from CT have gained widespread clinical acceptance and are an important method for evaluating complex anatomy and pathology. Recently, cinematic rendering (CR), a new 3D visualization methodology, has become available. CR utilizes a lighting model that allows for the production of photorealistic images from isotropic voxel data. Given how new this technique is, studies to evaluate its clinical utility and any potential advantages or disadvantages relative to other 3D methods such as volume rendering have yet to be published. In this pictorial review, we provide examples of normal calvarial, maxillofacial, and skull base anatomy and pathological conditions that highlight the potential for CR images to aid in patient evaluation and treatment planning. The highly detailed images and nuanced shadowing that are intrinsic to CR are well suited to the display of the complex anatomy in this region of the body. We look forward to studies with CR that will ascertain the ultimate value of this methodology to evaluate calvarium, maxillofacial, and skull base morphology as well as other complex anatomic structures.

  7. Intraoperative utilization of advanced imaging modalities in a complex kidney stone case: a pilot case study.

    PubMed

    Christiansen, Andrew R; Shorti, Rami M; Smith, Cory D; Prows, William C; Bishoff, Jay T

    2018-05-01

    Despite the increasing use of advanced 3D imaging techniques and 3D printing, these techniques have not yet been comprehensively compared in a surgical setting. The purpose of this study is to explore the effectiveness of five different advanced imaging modalities during a complex renal surgical procedure. A patient with a horseshoe kidney and multiple large, symptomatic stones, in whom extracorporeal shock wave lithotripsy (ESWL) and ureteroscopy treatment had failed, was used for this evaluation. CT data were used to generate five different imaging modalities: a 3D-printed model, three different volume-rendered models, and a geometric CAD model. A survey was used to evaluate the quality and breadth of the imaging modalities during four different phases of the laparoscopic procedure. In this complex kidney procedure, the CAD model, the 3D print, the volume rendering on an autostereoscopic 3D display, and the interactive and basic volume-rendered models all demonstrated added insight and complemented the surgical procedure. Manual CAD segmentation allowed tissue layers and/or kidney stones to be rendered in color and semi-transparent, allowing easier navigation through abnormal vasculature. The 3D print allowed for simultaneous visualization of the renal pelvis and surrounding vasculature. Our preliminary exploration indicates that various advanced imaging modalities, when properly utilized and supported during surgery, can be useful in complementing the CT data and the laparoscopic display. This study suggests that various imaging modalities, such as the ones utilized in this case, can be beneficial intraoperatively depending on the surgical step involved and may be more helpful than 3D-printed models. We also present factors to consider when evaluating advanced imaging modalities during complex surgery.

  8. The physics of volume rendering

    NASA Astrophysics Data System (ADS)

    Peters, Thomas

    2014-11-01

    Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
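
    The link between the two fields is the emission-absorption form of the transfer equation; a discretized front-to-back integration along one ray looks like this (an illustrative sketch, not the RADMC-3D module):

      # I = sum_j S_j * (1 - exp(-dtau_j)) * T_j, with running transmittance T.
      import numpy as np

      def integrate_ray(emission, opacity, ds=1.0):
          """Front-to-back compositing of source terms with transmittance."""
          I, T = 0.0, 1.0                 # accumulated intensity, transmittance
          for S, kappa in zip(emission, opacity):
              dtau = kappa * ds           # optical depth of this segment
              I += S * (1.0 - np.exp(-dtau)) * T
              T *= np.exp(-dtau)
          return I

      emission = np.array([0.2, 1.0, 0.5, 0.0])
      opacity = np.array([0.1, 0.8, 0.3, 0.05])
      print(integrate_ray(emission, opacity))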

  9. Research on Visualization of Ground Laser Radar Data Based on OSG

    NASA Astrophysics Data System (ADS)

    Huang, H.; Hu, C.; Zhang, F.; Xue, H.

    2018-04-01

    Three-dimensional (3D) laser scanning is an advanced technology integrating optics, mechanics, electronics, and computer technologies. It can scan the whole shape and form of spatial objects with high precision, allowing the point cloud data of a ground object to be collected directly and structured for rendering. High-performance 3D rendering engines are used to optimize and display such models in order to meet the demands of real-time realistic rendering of complex scenes. OpenSceneGraph (OSG) is an open-source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend; it is therefore widely used in the fields of virtual simulation, virtual reality, and scientific and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is built on OSG and the cross-platform C++ application development framework Qt. For point cloud data in .txt format and triangulated mesh data in .obj format, display functions for 3D laser point clouds and triangulated meshes are implemented. Experiments show that the platform has strong practical value, as it is easy to operate and provides good interaction.

  10. Comparison of mandibular first molar mesial root canal morphology using micro-computed tomography and clearing technique.

    PubMed

    Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Oh, Soram; Gu, Yu; Lee, Seung-Pyo; Chang, Seok Woo; Lee, Woocheol; Baek, Seung-Ho; Zhu, Qiang; Kum, Kee-Yeon

    2015-08-01

    Micro-computed tomography (MCT) with alternative image reformatting techniques shows complex and detailed root canal anatomy. This study compared two-dimensional (2D) and 3D MCT image reformatting with standard tooth clearing for studying mandibular first molar mesial root canal morphology. Extracted human mandibular first molar mesial roots (n=31) were scanned by MCT (Skyscan 1172). 2D thin-slab minimum intensity projection (TS-MinIP) and 3D volume rendered images were constructed. The same teeth were then processed by clearing and staining. For each root, images obtained from clearing, 2D, 3D and combined 2D and 3D techniques were examined independently by four endodontists and categorized according to Vertucci's classification. Fine anatomical structures such as accessory canals, intercanal communications and loops were also identified. Agreement among the four techniques for Vertucci's classification was 45.2% (14/31). The most frequent were Vertucci's type IV and then type II, although many had complex configurations that were non-classifiable. Generally, complex canal systems were more clearly visible in MCT images than with standard clearing and staining. Fine anatomical structures such as intercanal communications, accessory canals and loops were mostly detected with a combination of 2D TS-MinIP and 3D volume-rendering MCT images. Canal configurations and fine anatomic structures were more clearly observed in the combined 2D and 3D MCT images than the clearing technique. The frequency of non-classifiable configurations demonstrated the complexity of mandibular first molar mesial root canal anatomy.

  11. The three-dimensional Event-Driven Graphics Environment (3D-EDGE)

    NASA Technical Reports Server (NTRS)

    Freedman, Jeffrey; Hahn, Roger; Schwartz, David M.

    1993-01-01

    Stanford Telecom developed the Three-Dimensional Event-Driven Graphics Environment (3D-EDGE) for NASA Goddard Space Flight Center's (GSFC) Communications Link Analysis and Simulation System (CLASS). 3D-EDGE consists of a library of object-oriented subroutines which allow engineers with little or no computer graphics experience to programmatically manipulate, render, animate, and access complex three-dimensional objects.

  12. Quality improving techniques for free-viewpoint DIBR

    NASA Astrophysics Data System (ADS)

    Do, Luat; Zinger, Sveta; de With, Peter H. N.

    2010-02-01

    Interactive free-viewpoint selection applied to a 3D multi-view signal is a possible attractive feature of the rapidly developing 3D TV media. This paper explores a new rendering algorithm that computes a free-viewpoint based on depth image warping between two reference views from existing cameras. We have developed three quality enhancing techniques that specifically aim at solving the major artifacts. First, resampling artifacts are filled in by a combination of median filtering and inverse warping. Second, contour artifacts are processed while omitting warping of edges at high discontinuities. Third, we employ a depth signal for more accurate disocclusion inpainting. We obtain an average PSNR gain of 3 dB and 4.5 dB for the 'Breakdancers' and 'Ballet' sequences, respectively, compared to recently published results. While experimenting with synthetic data, we observe that the rendering quality is highly dependent on the complexity of the scene. Moreover, experiments are performed using compressed video from surrounding cameras. The overall system quality is dominated by the rendering quality and not by coding.
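
    The warping step can be sketched on a single scanline: pixels are forward-warped by a depth-derived disparity under a z-test, and isolated resampling cracks are filled by a median, loosely mirroring the first enhancement (the baseline and window size are assumptions).

      # Toy 1D depth-image warping with median crack filling.
      import numpy as np

      def warp_scanline(colors, depth, baseline=8.0):
          out = np.full_like(colors, np.nan, dtype=float)
          zout = np.full(colors.shape, np.inf)
          for x in range(len(colors)):
              d = int(round(baseline / depth[x]))       # disparity ~ 1 / depth
              xt = x + d
              if 0 <= xt < len(colors) and depth[x] < zout[xt]:
                  zout[xt] = depth[x]                   # nearest sample wins
                  out[xt] = colors[x]
          for x in np.flatnonzero(np.isnan(out)):       # fill small holes
              w = out[max(0, x - 2):x + 3]
              if not np.all(np.isnan(w)):
                  out[x] = np.nanmedian(w)
          return out

      line = np.linspace(0, 1, 32)
      depth = np.full(32, 4.0); depth[10:20] = 2.0      # a nearer object
      print(np.round(warp_scanline(line, depth), 2))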

  13. Human factors guidelines for applications of 3D perspectives: a literature review

    NASA Astrophysics Data System (ADS)

    Dixon, Sharon; Fitzhugh, Elisabeth; Aleva, Denise

    2009-05-01

    Once considered too processing-intensive for general utility, application of the third dimension to convey complex information has been facilitated by the recent proliferation of technological advancements in computer processing, 3D displays, and 3D perspective (2.5D) renderings within a 2D medium. The profusion of complex and rapidly changing dynamic information being conveyed in operational environments has elevated interest in possible military applications of 3D technologies. 3D can be a powerful mechanism for clearer information portrayal, facilitating rapid and accurate identification of key elements essential to mission performance and operator safety. However, implementation of 3D within legacy systems can be costly, making integration prohibitive. Therefore, identifying which tasks may benefit from 3D or 2.5D versus simple 2D visualizations is critical. Unfortunately, there is no "bible" of human factors guidelines for the usability optimization of 2D, 2.5D, or 3D visualizations, nor for determining which display best serves a particular application. Establishing such guidelines would provide an invaluable tool for designers and operators, and defining issues common to each will enhance design effectiveness. This paper presents the results of an extensive review of open-source literature addressing 3D information displays, with particular emphasis on the comparison of true 3D with 2D and 2.5D representations and their utility for military tasks. Seventy-five papers are summarized, highlighting militarily relevant applications of 3D visualizations and 2.5D perspective renderings. Based on these findings, human factors guidelines for when and how to use these visualizations are discussed, along with recommendations for further research.

  14. The new generation of OpenGL support in ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, M.

    2008-07-01

    OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.

  15. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing

    2017-02-06

    The performance of 3D rendering on a Graphics Processing Unit (GPU), which converts 3D vector streams into 2D frames with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving the overall rendering performance. 3D-stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory-based GPUs for efficient 3D rendering.

  16. From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data

    PubMed Central

    Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred

    2014-01-01

    Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. PMID:25145678

  17. Plenoptic layer-based modeling for image based rendering.

    PubMed

    Pearson, James; Brookes, Mike; Dragotti, Pier Luigi

    2013-09-01

    Image based rendering is an attractive alternative to model based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information for the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.
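
    Non-uniform layer placement can be sketched in two ways consistent with this description: uniform spacing in disparity (1/z), or placement at quantiles of the observed depth distribution (function names are assumptions, not the authors' code):

      # Depth-layer placement per plenoptic sampling vs. scene distribution.
      import numpy as np

      def layer_depths(z_min, z_max, n_layers):
          """Layers spaced uniformly in disparity (1/z)."""
          return 1.0 / np.linspace(1.0 / z_min, 1.0 / z_max, n_layers)

      def layers_from_scene(depths, n_layers):
          """Layers at quantiles of the scene's depth samples."""
          return np.quantile(depths, np.linspace(0.0, 1.0, n_layers))

      print(np.round(layer_depths(1.0, 10.0, 6), 2))
      scene = np.random.default_rng(0).uniform(2.0, 4.0, 1000)
      print(np.round(layers_from_scene(scene, 4), 2))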

  18. Interpreting three-dimensional structures from two-dimensional images: a web-based interactive 3D teaching model of surgical liver anatomy

    PubMed Central

    Crossingham, Jodi L; Jenkinson, Jodie; Woolridge, Nick; Gallinger, Steven; Tait, Gordon A; Moulton, Carol-Anne E

    2009-01-01

    Background: Given the increasing number of indications for liver surgery and the growing complexity of operations, many trainees in surgical, imaging and related subspecialties require a good working knowledge of the complex intrahepatic anatomy. Computed tomography (CT), the most commonly used liver imaging modality, enhances our understanding of liver anatomy, but comprises a two-dimensional (2D) representation of a complex 3D organ. It is challenging for trainees to acquire the necessary skills for converting these 2D images into 3D mental reconstructions because learning opportunities are limited and internal hepatic anatomy is complicated, asymmetrical and variable. We have created a website that uses interactive 3D models of the liver to assist trainees in understanding the complex spatial anatomy of the liver and to help them create a 3D mental interpretation of this anatomy when viewing CT scans. Methods: Computed tomography scans were imported into DICOM imaging software (OsiriX™) to obtain 3D surface renderings of the liver and its internal structures. Using these 3D renderings as a reference, 3D models of the liver surface and the intrahepatic structures, portal veins, hepatic veins, hepatic arteries and the biliary system were created using 3D modelling software (Cinema 4D™). Results: Using current best practices for creating multimedia tools, a unique, freely available, online learning resource has been developed, entitled Visual Interactive Resource for Teaching, Understanding And Learning Liver Anatomy (VIRTUAL Liver) (http://pie.med.utoronto.ca/VLiver). This website uses interactive 3D models to provide trainees with a constructive resource for learning common liver anatomy and liver segmentation, and facilitates the development of the skills required to mentally reconstruct a 3D version of this anatomy from 2D CT scans. Discussion: Although the intended audience for VIRTUAL Liver consists of residents in various medical and surgical specialties, the website will also be useful for other health care professionals (e.g. radiologists, nurses, hepatologists, radiation oncologists, family doctors) and educators because it provides a comprehensive resource for teaching liver anatomy. PMID:19816618

  19. Advanced texture filtering: a versatile framework for reconstructing multi-dimensional image data on heterogeneous architectures

    NASA Astrophysics Data System (ADS)

    Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich

    2015-01-01

    Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.
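
    As an illustration of third-order reconstruction beyond GPU-native linear filtering, here is a 1D Catmull-Rom kernel, one of the cubic filters such a framework can provide (this is not DeskVOX code):

      # Catmull-Rom reconstruction at a continuous coordinate x.
      import numpy as np

      def catmull_rom(samples, x):
          i = int(np.floor(x))
          t = x - i
          p = [samples[np.clip(i + k, 0, len(samples) - 1)] for k in (-1, 0, 1, 2)]
          return 0.5 * (2 * p[1]
                        + (-p[0] + p[2]) * t
                        + (2 * p[0] - 5 * p[1] + 4 * p[2] - p[3]) * t**2
                        + (-p[0] + 3 * p[1] - 3 * p[2] + p[3]) * t**3)

      data = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
      print(catmull_rom(data, 1.5))  # smooth value between the samples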

  20. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation

    PubMed Central

    Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei

    2017-01-01

    Depth image-based rendering (DIBR), which renders virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D-to-3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D-to-3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. First, our framework integrates hybrid constraints, including scene structure, edge consistency, and visual saliency information, in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method can simultaneously achieve hole filling, edge correction, and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high-quality 2D-to-3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027
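
    The domain transform underlying the framework can be sketched in 1D: a recursive filter whose feedback weight collapses wherever the guide signal changes sharply, so depth edges survive the smoothing (the single pair of passes and the parameters are simplifying assumptions):

      # 1D recursive domain-transform smoothing of a noisy depth scanline.
      import numpy as np

      def domain_transform_smooth(signal, guide, sigma_s=8.0, sigma_r=0.2):
          dt = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(guide))  # domain metric
          a = np.exp(-np.sqrt(2.0) / sigma_s)
          w = a ** dt                            # per-gap feedback weight
          out = signal.astype(float).copy()
          for i in range(1, len(out)):           # left-to-right pass
              out[i] += w[i - 1] * (out[i - 1] - out[i])
          for i in range(len(out) - 2, -1, -1):  # right-to-left pass
              out[i] += w[i] * (out[i + 1] - out[i])
          return out

      rng = np.random.default_rng(1)
      depth = np.r_[np.zeros(10), np.ones(10)] + rng.normal(0, 0.05, 20)
      print(np.round(domain_transform_smooth(depth, depth), 2))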

  21. An interactive display system for large-scale 3D models

    NASA Astrophysics Data System (ADS)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (levels of detail) model of the reconstructed 3D scene in advance and then use an out-of-core, view-dependent, multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and the 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal/external memory exchange mechanism, making it possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
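
    View-dependent multi-resolution rendering commonly refines a node while its projected geometric error exceeds a pixel tolerance; a sketch of such a test (names and constants are assumptions, not the paper's code):

      # Screen-space-error test deciding whether to load a finer LOD level.
      def needs_refinement(node_error_m, node_distance_m, fov_px=1000.0, tol_px=1.0):
          """True if the node's error projects to more than tol_px pixels."""
          projected_px = node_error_m / max(node_distance_m, 1e-6) * fov_px
          return projected_px > tol_px

      print(needs_refinement(0.5, 100.0))   # True: refine near the camera
      print(needs_refinement(0.5, 5000.0))  # False: coarse level suffices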

  22. Application of three-dimensional rendering in joint-related ganglion cysts.

    PubMed

    Spinner, Robert J; Edwards, Phillip K; Amrami, Kimberly K

    2006-05-01

    The origin of para-articular cysts is poorly understood and controversial. The relatively common, simple (extraneural) cysts are presumed to be derived from joints, although joint connections are not always established. Rarer complex cysts are thought by many to form de novo within nerves (intraneural ganglion cysts) or within vessels (adventitial cysts) (degenerative theory). We believe that these simple and complex ganglion cysts are joint-related (articular theory). Joint connections are often not readily appreciated with routine imaging or at surgery. Not identifying and/or treating joint connections frequently leads to cyst recurrence. More sophisticated imaging may enhance visualization of these joint connections. We created a 3D rendering technique to assess potential joint connections of simple and complex cysts localized to the knee and superior tibiofibular joints in patients with fibular (peroneal) neuropathy. Two- and three-dimensional data sets from MRI examinations were segmented semiautomatically by signal intensity with further refinement based on interaction with the user to identify specific anatomic structures, such as small nerves and vessels on serial images. The bone, cysts, nerves, and vessels were each assigned different color representations, and 3D renderings were created in ANALYZE using the data sets closest to isotropic (voxel with equal length in all dimensions) resolution as the primary background rendering. We selected four cases to illustrate the spectrum of pathology. In all of these cases, we demonstrated joint connections and correlated imaging and operative findings. Surgery addressing the cyst and the joint connection resulted in excellent outcomes; postoperative MRIs done more than 6 months later confirmed that there was no recurrence. In addition to highlighting the important relationship of these cysts to neighboring anatomic structures, this 3D technique allows visualization of "occult" connections not readily appreciated with standard MR imaging. We believe that these joint-related cysts have a common pathogenesis; they dissect through a capsular rent and follow the path of least resistance; they may form simple cysts by dissecting out into the soft tissue, or more complex cysts by dissecting within the epineurium of nerves or adventitia of vessels (along an articular branch), or various combinations of all of these types of cysts. Understanding the pathogenesis for cyst formation will improve surgical management and outcomes. We have adapted this 3D technique to enhance the visualization of cysts occurring at other joints.

  23. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time. It is used to represent deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If a brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI Onyx. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
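
    One plausible reading of the brick-coherence test is sketched below: a brick is re-uploaded as a 3D texture only when it differs sufficiently from the copy already resident in memory (the threshold is an assumption, and a dictionary stands in for the OpenGL texture upload):

      # Brick reuse across frames of a time-varying (4D) volume.
      import numpy as np

      cache = {}

      def upload_if_changed(brick_id, brick, tol=1e-3):
          """Return True if the brick had to be (re)loaded as a texture."""
          old = cache.get(brick_id)
          if old is not None and np.mean(np.abs(old - brick)) < tol:
              return False                  # coherent with previous frame: reuse
          cache[brick_id] = brick.copy()    # stand-in for the OpenGL upload
          return True

      rng = np.random.default_rng(2)
      b = rng.random((8, 8, 8))
      print(upload_if_changed("brick-0", b))         # True: first load
      print(upload_if_changed("brick-0", b + 1e-5))  # False: reused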

  24. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
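
    The discriminability index used here is computed from hit and false-alarm rates as d' = z(H) - z(FA); a worked example with illustrative rates:

      # d' from signal detection theory (rates below are made up for illustration).
      from statistics import NormalDist

      def d_prime(hit_rate, fa_rate):
          z = NormalDist().inv_cdf
          return z(hit_rate) - z(fa_rate)

      print(round(d_prime(0.85, 0.20), 2))  # 1.88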

  25. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    PubMed

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations on pathological changes in the anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are increasingly used for difficult operations in adults. To minimize radiation exposure and obtain better soft-tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from it. A newly developed and modified, powerful raycasting-based 3D volume rendering software package (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to its easy handling and high-quality visualization with an enormous gain of information, the presented system is now an established part of routine surgical planning.

  26. Video coding for 3D-HEVC based on saliency information

    NASA Astrophysics Data System (ADS)

    Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan

    2016-11-01

    As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new-generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, which is an important part of the Human Visual System (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency information of each frame, the texture image and its corresponding depth map are divided into three regions: the salient area, the middle area, and the non-salient area. Afterwards, different quantization parameters are assigned to different regions to conduct low-complexity coding. Finally, the compressed video generates new viewpoint videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to 38% encoding-time reduction without subjective quality loss in compression or rendering.

  27. Comparison of alternative image reformatting techniques in micro-computed tomography and tooth clearing for detailed canal morphology.

    PubMed

    Lee, Ki-Wook; Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Lim, Sang-Min; Chang, Seok Woo; Ha, Byung-Hyun; Zhu, Qiang; Kum, Kee-Yeon

    2014-03-01

    Micro-computed tomography (MCT) shows detailed root canal morphology that is not seen with traditional tooth clearing. However, alternative image reformatting techniques in MCT involving 2-dimensional (2D) minimum intensity projection (MinIP) and 3-dimensional (3D) volume-rendering reconstruction have not been directly compared with clearing. The aim was to compare alternative image reformatting techniques in MCT with tooth clearing on the mesiobuccal (MB) root of maxillary first molars. Eighteen maxillary first molar MB roots were scanned, and 2D MinIP and 3D volume-rendered images were reconstructed. Subsequently, the same MB roots were processed by traditional tooth clearing. Images from 2D, 3D, 2D + 3D, and clearing techniques were assessed by 4 endodontists to classify canal configuration and to identify fine anatomic structures such as accessory canals, intercanal communications, and loops. All image reformatting techniques in MCT showed detailed configurations and numerous fine structures, such that none were classified as simple type I or II canals; several were classified as types III and IV according to Weine classification or types IV, V, and VI according to Vertucci; and most were nonclassifiable because of their complexity. The clearing images showed less detail, few fine structures, and numerous type I canals. Classification of canal configuration was in 100% intraobserver agreement for all 18 roots visualized by any of the image reformatting techniques in MCT but for only 4 roots (22.2%) classified according to Weine and 6 (33.3%) classified according to Vertucci, when using the clearing technique. The combination of 2D MinIP and 3D volume-rendered images showed the most detailed canal morphology and fine anatomic structures.

  28. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  29. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    PubMed

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface rendering and a volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method; the remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model, and a transparent shaded-surface model. The hybrid 3D visualization combines the advantages of the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationships of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming, detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  30. Elasticity-based three dimensional ultrasound real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Matinfar, Mohammad; Ahmad, Omar; Rivaz, Hassan; Choti, Michael; Taylor, Russell H.

    2009-02-01

    Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Its use, however, has been hindered by the lack of real-time visualization methods capable of producing high-quality 3D renderings of the target or surface of interest. Volume rendering is a well-known visualization method that can display clear surfaces from acquired volumetric data and has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target or surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetrics and angiography applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on a GPU, which gives an update rate of 40 volumes/sec.
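
    The proposed opacity computation can be sketched as follows: stiff tissue (low strain) maps to high opacity and is mixed with a conventional gradient-magnitude term (the exponential mapping and the mixing weight are assumptions):

      # Elasticity-driven opacity for B-mode volume rendering.
      import numpy as np

      def opacity(bmode, strain, mix=0.5, soft=0.05):
          """Low strain (stiff tissue) -> high opacity; blend with gradients."""
          strain_op = np.exp(-strain / soft)
          grad = np.linalg.norm(np.gradient(bmode), axis=0)
          grad_op = grad / (grad.max() + 1e-6)
          return mix * strain_op + (1 - mix) * grad_op

      rng = np.random.default_rng(3)
      vol = rng.random((16, 16, 16))       # B-mode intensities
      strain = rng.random((16, 16, 16))    # strain from two consecutive scans
      print(opacity(vol, strain).mean())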

  31. High Performance GPU-Based Fourier Volume Rendering.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr

    2015-01-01

    Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms, which are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive, competent platform that can deliver giant computational raw power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high-performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117× compared to a single-threaded hybrid implementation that uses the CPU and GPU together, by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
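
    The projection-slice theorem behind FVR is easy to verify numerically for an axis-aligned view: the inverse 2D FFT of the central slice of the volume's 3D spectrum equals the brute-force projection (general view angles additionally require interpolating the slice):

      # Numerical check of the Fourier projection-slice theorem.
      import numpy as np

      vol = np.random.default_rng(4).random((32, 32, 32))

      spectrum = np.fft.fftn(vol)             # 3D FFT, done once per volume
      central_slice = spectrum[:, :, 0]       # plane through the DC term
      projection = np.fft.ifftn(central_slice).real

      direct = vol.sum(axis=2)                # brute-force line integrals
      print(np.allclose(projection, direct))  # True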

  32. Improvement of the Correlative AFM and ToF-SIMS Approach Using an Empirical Sputter Model for 3D Chemical Characterization.

    PubMed

    Terlier, T; Lee, J; Lee, K; Lee, Y

    2018-02-06

    Technological progress has spurred the development of increasingly sophisticated analytical devices. The full characterization of structures in terms of sample volume and composition is now highly complex. Here, a highly improved solution for 3D characterization of samples, based on an advanced method for 3D data correction, is proposed. Traditionally, secondary ion mass spectrometry (SIMS) provides the chemical distribution of sample surfaces. Combining successive sputtering with 2D surface projections enables a 3D volume rendering to be generated. However, surface topography can distort the volume rendering by necessitating the projection of a nonflat surface onto a planar image. Moreover, the sputtering is highly dependent on the probed material. Local variation of composition affects the sputter yield and the beam-induced roughness, which in turn alters the 3D render. To circumvent these drawbacks, the correlation of atomic force microscopy (AFM) with SIMS has been proposed in previous studies as a solution for the 3D chemical characterization. To extend the applicability of this approach, we have developed a methodology using AFM-time-of-flight (ToF)-SIMS combined with an empirical sputter model, "dynamic-model-based volume correction", to universally correct 3D structures. First, the simulation of 3D structures highlighted the great advantages of this new approach compared with classical methods. Then, we explored the applicability of this new correction to two types of samples, a patterned metallic multilayer and a diblock copolymer film presenting surface asperities. In both cases, the dynamic-model-based volume correction produced an accurate 3D reconstruction of the sample volume and composition. The combination of AFM-SIMS with the dynamic-model-based volume correction improves the understanding of the surface characteristics. Beyond the useful 3D chemical information provided by dynamic-model-based volume correction, the approach permits us to enhance the correlation of chemical information from spectroscopic techniques with the physical properties obtained by AFM.

  33. Architecture for high performance stereoscopic game rendering on Android

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high profile titles on established platforms like Windows PC and PS3 there is a lack of GPU independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional render techniques, including depth based image rendering, both in terms of frame rates and impact on battery consumption.

  34. 3D printing the pterygopalatine fossa: a negative space model of a complex structure.

    PubMed

    Bannon, Ross; Parihar, Shivani; Skarparis, Yiannis; Varsou, Ourania; Cezayirli, Enis

    2018-02-01

    The pterygopalatine fossa is one of the most complex anatomical regions to understand. It is poorly visualized in cadaveric dissection and most textbooks rely on schematic depictions. We describe our approach to creating a low-cost, 3D model of the pterygopalatine fossa, including its associated canals and foramina, using an affordable "desktop" 3D printer. We used open source software to create a volume render of the pterygopalatine fossa from axial slices of a head computerised tomography scan. These data were then exported to a 3D printer to produce an anatomically accurate model. The resulting 'negative space' model of the pterygopalatine fossa provides a useful and innovative aid for understanding the complex anatomical relationships of the pterygopalatine fossa. This model was designed primarily for medical students; however, it will also be of interest to postgraduates in ENT, ophthalmology, neurosurgery, and radiology. The technical process described may be replicated by other departments wishing to develop their own anatomical models whilst incurring minimal costs.

  35. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, producing traditional computer-generated holograms (CGHs) often takes a long computation time, without offering complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic, high-quality images and noticeably reduces the computation time thanks to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched and traced in parallel under different illumination conditions and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  36. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards, and various 3D benchmark tools have been utilized in order to analyse the performance achievable in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O, and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice, and virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayr, S., E-mail: suvi.bayr@jyu.fi; Ojanperä, M.; Kaparaju, P.

    Highlights: • Rendering wastes' mono-digestion and co-digestion with potato pulp were studied. • CSTR process with OLR of 1.5 kg VS/m³ d, HRT of 50 d was unstable in mono-digestion. • Free NH₃ inhibited mono-digestion of rendering wastes. • CSTR process with OLR of 1.5 kg VS/m³ d, HRT of 50 d was stable in co-digestion. • Co-digestion increased methane yield somewhat compared to mono-digestion. - Abstract: In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55 °C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of digestion intermediate products viz., volatile fatty acids (VFAs), long chain fatty acids (LCFAs) and ammonia nitrogen (NH₄-N and/or free NH₃) can cause process imbalance during the digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m³ d and hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm³/kg VS fed. On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved the process stability and increased methane yields (500–680 dm³/kg VS fed). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve the process stability and methane yields from these difficult-to-treat industrial waste materials.

  18. Algorithms for Haptic Rendering of 3D Objects

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam

    2003-01-01

    Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
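
    As a flavor of what such algorithms compute each servo cycle, here is a generic penalty-based force for a rigid sphere: push the probe out along the surface normal in proportion to penetration depth. This is a textbook scheme for illustration, not the authors' specific algorithm.

      import numpy as np

      def sphere_contact_force(probe_pos, probe_vel, center, radius, k, b):
          """Spring-damper contact force on a haptic probe touching a sphere."""
          offset = probe_pos - center
          dist = np.linalg.norm(offset)
          depth = radius - dist
          if depth <= 0.0:
              return np.zeros(3)                 # probe is outside: no force
          normal = offset / dist                 # outward surface normal
          v_n = np.dot(probe_vel, normal)        # normal component of probe velocity
          return k * depth * normal - b * v_n * normal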

  19. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  20. Distributed rendering for multiview parallax displays

    NASA Astrophysics Data System (ADS)

    Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.

    2006-02-01

    3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.

  1. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.

  2. Palmdale International Airport, Palmdale, California. Airport Development Program

    DTIC Science & Technology

    1982-01-01

    …and ONT have rendered this system concept academic. Concept B, described starting on page 209, is basically a reflection of the current situation… very different, and impacts the PIA in a very different manner. For example, the almost continuous use of the Complex 1 and Complex 4 MOAs will render… been described in considerable detail by Underhill (n.d.), Strong (1929), and others (Heizer 1978). Groups were subdivided into small bands…

  3. Pseudo-shading technique in the two-dimensional domain: a post-processing algorithm for enhancing the Z-buffer of a three-dimensional binary image.

    PubMed

    Tan, A C; Richards, R

    1989-01-01

    Three-dimensional (3D) medical graphics is becoming popular in clinical use on tomographic scanners. Research work in 3D reconstructive display of computerized tomography (CT) and magnetic resonance imaging (MRI) scans on conventional computers has produced many so-called pseudo-3D images. The quality of these images depends on the rendering algorithm, the coarseness of the digitized object, the number of grey levels and the image screen resolution. CT and MRI data are fundamentally voxel based and they produce images that are coarse because of the resolution of the data acquisition system. 3D images produced by the Z-buffer depth shading technique suffer loss of detail when complex objects with fine textural detail need to be displayed. Attempts have been made to improve the display of voxel objects, and existing techniques have shown the improvement possible using these post-processing algorithms. The improved rendering technique works on the Z-buffer image to generate a shaded image using a single light source in any direction. The effectiveness of the technique in generating a shaded image has been shown to be a useful means of presenting 3D information for clinical use.
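
    The essence of such a 2D-domain pass can be sketched briefly: estimate surface normals from the Z-buffer's depth gradients and apply Lambertian shading for a single directional light. The sketch below is a simplified illustration (background handling and grey-level quantization omitted), not the authors' exact algorithm.

      import numpy as np

      def shade_zbuffer(z, light_dir):
          """Diffuse shading computed purely from a Z-buffer image."""
          gy, gx = np.gradient(z.astype(float))            # depth gradients
          # Normal of the surface z(x, y) is proportional to (-dz/dx, -dz/dy, 1).
          n = np.dstack((-gx, -gy, np.ones_like(gx)))
          n /= np.linalg.norm(n, axis=2, keepdims=True)
          l = np.asarray(light_dir, float)
          l /= np.linalg.norm(l)
          return np.clip(n @ l, 0.0, 1.0)                  # Lambertian intensity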

  4. Design and Validation of 3D Printed Complex Bone Models with Internal Anatomic Fidelity for Surgical Training and Rehearsal.

    PubMed

    Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan

    2014-01-01

    Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.

  5. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  6. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  8. The Effect of a Simple Simulation Game on Long-Term Vocabulary Retention

    ERIC Educational Resources Information Center

    Franciosi, Stephan J.; Yagi, Junichi; Tomoshige, Yuuki; Ye, Suying

    2016-01-01

    Recent studies have shown that simulation games may be useful tools for supporting foreign language education. However, much of this research has focused on games using 3D graphic technology, which entail technical requirements that may render them too complex for use in many educational contexts. Accordingly, we wanted to determine if less…

  9. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  10. Structural and congenital heart disease interventions: the role of three-dimensional printing.

    PubMed

    Meier, L M; Meineri, M; Qua Hiansen, J; Horlick, E M

    2017-02-01

    Advances in catheter-based interventions in structural and congenital heart disease have mandated an increased demand for three-dimensional (3D) visualisation of complex cardiac anatomy. Despite progress in 3D imaging modalities, the pre- and periprocedural visualisation of spatial anatomy is relegated to two-dimensional flat screen representations. 3D printing is an evolving technology based on the concept of additive manufacturing, where computerised digital surface renders are converted into physical models. Printed models replicate complex structures in tangible forms that cardiovascular physicians and surgeons can use for education, preprocedural planning and device testing. In this review we discuss the different steps of the 3D printing process, which include image acquisition, segmentation, printing methods and materials. We also examine the expanded applications of 3D printing in the catheter-based treatment of adult patients with structural and congenital heart disease while highlighting the current limitations of this technology in terms of segmentation, model accuracy and dynamic capabilities. Furthermore, we provide information on the resources needed to establish a hospital-based 3D printing laboratory.

  11. [Registration and 3D rendering of serial tissue section images].

    PubMed

    Liu, Zhexing; Jiang, Guiping; Dong, Wu; Zhang, Yu; Xie, Xiaomian; Hao, Liwei; Wang, Zhiyuan; Li, Shuxiang

    2002-12-01

    Reconstructing 3D images from serial tissue section images is an important morphological research method, and registration of the serial images is a key step in 3D reconstruction. Firstly, an introduction to the segmentation-counting registration algorithm, which is based on the joint histogram, is presented. After thresholding the two images to be registered, the criterion function is defined as a count over a specific region of the joint histogram, which greatly speeds up the alignment process. The method is then used to perform serial tissue image matching, laying a solid foundation for 3D rendering. Finally, preliminary surface rendering results are presented.
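
    A minimal sketch of the criterion as described: after thresholding, counting mismatched pixels is exactly a count over the "disagreement" cells of the 2 × 2 joint histogram, which an exhaustive translation search can then minimize. Names and the brute-force search are illustrative.

      import numpy as np
      from scipy.ndimage import shift as nd_shift

      def mismatch_count(fixed, moving, t_fixed, t_moving, dx, dy):
          """Off-diagonal joint-histogram count for a candidate translation."""
          a = fixed > t_fixed
          b = nd_shift(moving.astype(float), (dy, dx), order=0) > t_moving
          return np.count_nonzero(a ^ b)        # fewer mismatches = better alignment

      def register(fixed, moving, t_fixed, t_moving, search=10):
          """Best integer translation within +/- `search` pixels."""
          best = None
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  c = mismatch_count(fixed, moving, t_fixed, t_moving, dx, dy)
                  if best is None or c < best[0]:
                      best = (c, dx, dy)
          return best[1], best[2]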

  12. Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci™ robotic console.

    PubMed

    Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe

    2013-09-01

    Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon, and this technique is a step toward computer-aided surgery that will progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Specific CT 3D rendering of the treatment zone after Irreversible Electroporation (IRE) in a pig liver model: the “Chebyshev Center Concept” to define the maximum treatable tumor size

    PubMed Central

    2014-01-01

    Background Size and shape of the treatment zone after irreversible electroporation (IRE) can be difficult to depict due to the use of multiple applicators with complex spatial configuration. Exact geometrical definition of the treatment zone, however, is mandatory for acute treatment control since incomplete tumor coverage results in limited oncological outcome. In this study, the “Chebyshev Center Concept” was introduced for CT 3D rendering to assess size and position of the maximum treatable tumor at a specific safety margin. Methods In seven pig livers, three different IRE protocols were applied to create treatment zones of different size and shape: Protocol 1 (n = 5 IREs), Protocol 2 (n = 5 IREs), and Protocol 3 (n = 5 IREs). Contrast-enhanced CT was used to assess the treatment zones. Technique A consisted of a semi-automated software prototype for CT 3D rendering implementing the “Chebyshev Center Concept” (the “Chebyshev Center” is the center of the largest inscribed sphere within the treatment zone), with automated definition of parameters for size, shape and position. Technique B consisted of standard CT 3D analysis with manual definition of the same parameters except position. Results For Protocols 1 and 2, the short diameter of the treatment zone and the diameter of the largest inscribed sphere within the treatment zone were not significantly different between Techniques A and B. For Protocol 3, the short diameter of the treatment zone and the diameter of the largest inscribed sphere within the treatment zone were significantly smaller for Technique A compared with Technique B (41.1 ± 13.1 mm versus 53.8 ± 1.1 mm and 39.0 ± 8.4 mm versus 53.8 ± 1.1 mm; p < 0.05 and p < 0.01). For Protocols 1, 2 and 3, the sphericity of the treatment zone was significantly larger for Technique A compared with B. Conclusions Regarding size and shape of the treatment zone after IRE, CT 3D rendering implementing the “Chebyshev Center Concept” provides significantly different results compared with standard CT 3D analysis. Since the latter overestimates the size of the treatment zone, the “Chebyshev Center Concept” could be used for a more objective acute treatment control. PMID:24410997
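
    The Chebyshev center itself is easy to obtain from a segmented treatment-zone mask with a Euclidean distance transform, as in the sketch below (an illustration of the concept only, not the authors' prototype). The maximum treatable spherical tumor at safety margin m then has radius r − m, where r is the inscribed-sphere radius.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def chebyshev_center(mask, voxel_mm=1.0):
          """Center and radius of the largest sphere inscribed in `mask`."""
          # Distance from every inside voxel to the nearest boundary voxel.
          dist = distance_transform_edt(mask) * voxel_mm
          center = np.unravel_index(np.argmax(dist), dist.shape)
          return center, dist[center]            # (voxel index, radius in mm)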

  14. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    PubMed

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a 512 × 512 pixel DRR from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were utilized throughout. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
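
    At its core, a DRR is a set of attenuation line integrals through the CT volume followed by Beer-Lambert exponentiation. The sketch below shows the simplest orthographic, axis-aligned case with ray sub-sampling (one of the kinds of simplification the paper exploits); it is a CPU/numpy illustration, not the GPU implementation.

      import numpy as np

      def drr_orthographic(volume, step_mm, n_steps):
          """DRR along the z axis of a CT volume shaped (z, y, x)."""
          # Sub-sample the ray: visit only n_steps of the z slices.
          zs = np.linspace(0, volume.shape[0] - 1, n_steps).round().astype(int)
          line_integral = volume[zs].sum(axis=0) * step_mm   # attenuation sum
          return np.exp(-line_integral)                      # transmitted intensity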

  15. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a 512 × 512 pixel DRR from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were utilized throughout. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399

  16. Establishing the 3-D finite element solid model of femurs in partial by volume rendering.

    PubMed

    Zhang, Yinwang; Zhong, Wuxue; Zhu, Haibo; Chen, Yun; Xu, Lingjun; Zhu, Jianmin

    2013-01-01

    Although several methods of femoral three-dimensional (3-D) finite element modeling are already available, it remains rare to report 3-D finite element solid models of femurs built in partial by the volume rendering method. We aim to analyze the advantages of this modeling method by establishing a 3-D finite element solid model of femurs in partial by volume rendering. A 3-D finite element model of normal human femurs, made up of three anatomic structures - cortical bone, cancellous bone and pulp cavity - was constructed following pretreatment of the original CT images. Moreover, finite element analysis was carried out with different material properties: three types of materials assigned to cortical bone, six to cancellous bone, and a single one to pulp cavity. The established 3-D finite element model of the femurs contains three anatomical structures: cortical bone, cancellous bone, and pulp cavity. The compressive stress was primarily concentrated in the medial surfaces of the femur, especially in the calcar femorale. Compared with whole modeling by the volume rendering method, the 3-D finite element solid model created in partial is more realistic and better suited for finite element analysis. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  17. Archaeological Investigations in the Gainesville Lake Area of the Tennessee-Tombigbee Waterway. Volume V. Archaeology of the Gainesville Lake Area: Synthesis.

    DTIC Science & Technology

    1982-09-01

    …frequently awkward verbiage, thus rendering the report more readable. Richard Walling produced the figures and made many constructive comments on the… the Cobbs Swamp complex (Chase 1978), had developed into the Henderson complex (Dickens 1971). By approximately A.D. 400, check and simple… Methods in Archaeology, edited by Robert F. Heizer and Sherburne F. Cook, pp. 60–92. Viking Fund Publications in Anthropology 28. Chicago. Stephenson…

  18. Ink Wash Painting Style Rendering With Physically-based Ink Dispersion Model

    NASA Astrophysics Data System (ADS)

    Wang, Yifan; Li, Weiran; Zhu, Qing

    2018-04-01

    This paper presents a real-time rendering method, based on the GPU programmable pipeline, for rendering 3D scenes in ink wash painting style. The method is divided into three main parts: first, render the ink properties of the 3D model by calculating its vertex curvature; then, cache the ink properties to a paper structure and use an ink dispersion model, defined with reference to the theory of porous media, to simulate the dispersion of ink; finally, convert the ink properties to pixel color information and render it to the screen. This method achieves better visual quality than previous methods.

  19. An improved method of continuous LOD based on fractal theory in terrain rendering

    NASA Astrophysics Data System (ADS)

    Lin, Lan; Li, Lijun

    2007-11-01

    With the improvement of computer graphics hardware capability, algorithms for 3D terrain rendering have become a hot topic in real-time visualization. In order to resolve the conflict between rendering speed and rendering realism, this paper gives an improved method of terrain rendering which extends the traditional continuous level-of-detail technique with fractal theory. With this method, the program need not repeatedly rebuild terrain models at different resolutions in memory; instead, it obtains the fractal characteristic parameters of each region according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape while increasing the speed of real-time 3D terrain rendering.
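
    A toy version of viewpoint-driven refinement in this spirit is sketched below: a patch is refined until its estimated error falls below a threshold, with the error scaled by a fractal roughness term so that rougher regions keep more detail at the same distance. All constants and names are illustrative, not the paper's criterion.

      def patch_lod(distance, fractal_dim, tau=2.0, max_level=8):
          """Refinement level for a terrain patch from distance and roughness."""
          roughness = fractal_dim - 2.0          # 0 = flat plane, 1 = very rough
          error = (1.0 + 8.0 * roughness) / max(distance, 1e-6)
          level = 0
          while error > tau and level < max_level:
              error /= 2.0                       # each refinement halves the error
              level += 1
          return level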

  20. Interactive 3-D graphics workstations in stereotaxy: clinical requirements, algorithms, and solutions

    NASA Astrophysics Data System (ADS)

    Ehricke, Hans-Heino; Daiber, Gerhard; Sonntag, Ralf; Strasser, Wolfgang; Lochner, Mathias; Rudi, Lothar S.; Lorenz, Walter J.

    1992-09-01

    In stereotactic treatment planning, the spatial relationships between a variety of objects have to be taken into account in order to avoid destruction of vital brain structures and rupture of vasculature. The visualization of these highly complex relations may be supported by 3-D computer graphics methods. In this context, the three-dimensional display of the intracranial vascular tree and additional objects, such as neuroanatomy, pathology, stereotactic devices, or isodose surfaces, is of high clinical value. We report an advanced rendering method for depth-enhanced maximum intensity projection from magnetic resonance angiography (MRA) and a walk-through approach to the analysis of MRA volume data. Furthermore, various methods for multiple-object 3-D rendering in stereotaxy are discussed. The development of advanced applications in medical imaging can hardly be successful if image acquisition problems are disregarded. We put particular emphasis on the use of conventional MRI and MRA for stereotactic guidance. The problem of MR distortion is discussed and a novel three-dimensional approach to the quantification and correction of the distortion patterns is presented. Our results suggest that the sole use of MR for stereotactic guidance is highly practical. The true three-dimensionality of the acquired datasets opens up new perspectives for stereotactic treatment planning. For the first time it is now possible to integrate all the necessary information into 3-D scenes, thus enabling interactive 3-D planning.

  1. Using FastX on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    …with full 3D hardware acceleration. The traditional method of displaying graphics applications to a remote X server (indirect rendering) supports 3D hardware acceleration, but this approach causes all of the OpenGL commands and 3D data to be sent over the network to be rendered on the client machine…

  2. Seeing More by Showing Less: Orientation-Dependent Transparency Rendering for Fiber Tractography Visualization

    PubMed Central

    Tax, Chantal M. W.; Chamberland, Maxime; van Stralen, Marijn; Viergever, Max A.; Whittingstall, Kevin; Fortin, David; Descoteaux, Maxime; Leemans, Alexander

    2015-01-01

    Fiber tractography plays an important role in exploring the architectural organization of fiber trajectories, both in fundamental neuroscience and in clinical applications. With the advent of diffusion MRI (dMRI) approaches that can also model “crossing fibers”, the complexity of the fiber network as reconstructed with tractography has increased tremendously. Many pathways interdigitate and overlap, which hampers an unequivocal 3D visualization of the network and impedes an efficient study of its organization. We propose a novel fiber tractography visualization approach that interactively and selectively adapts the transparency rendering of fiber trajectories as a function of their orientation to enhance the visibility of the spatial context. More specifically, pathways that are oriented (locally or globally) along a user-specified opacity axis can be made more transparent or opaque. This substantially improves the 3D visualization of the fiber network and the exploration of tissue configurations that would otherwise be largely covered by other pathways. We present examples of fiber bundle extraction and neurosurgical planning cases where the added benefit of our new visualization scheme is demonstrated over conventional fiber visualization approaches. PMID:26444010
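
    The opacity modulation itself is compact: take local segment tangents along each streamline and blend per-segment alpha by the absolute cosine between tangent and the user-specified opacity axis. A sketch under those assumptions (parameter names are ours):

      import numpy as np

      def streamline_alpha(points, axis, alpha_parallel=0.1, alpha_perp=1.0):
          """Per-segment opacity for one streamline of shape (N, 3)."""
          a = np.asarray(axis, float)
          a /= np.linalg.norm(a)
          t = np.diff(points, axis=0)                    # local segment tangents
          t /= np.linalg.norm(t, axis=1, keepdims=True)  # (assumes no zero-length segments)
          align = np.abs(t @ a)                          # |cos| in [0, 1]
          return alpha_perp + (alpha_parallel - alpha_perp) * align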

  3. Seeing More by Showing Less: Orientation-Dependent Transparency Rendering for Fiber Tractography Visualization.

    PubMed

    Tax, Chantal M W; Chamberland, Maxime; van Stralen, Marijn; Viergever, Max A; Whittingstall, Kevin; Fortin, David; Descoteaux, Maxime; Leemans, Alexander

    2015-01-01

    Fiber tractography plays an important role in exploring the architectural organization of fiber trajectories, both in fundamental neuroscience and in clinical applications. With the advent of diffusion MRI (dMRI) approaches that can also model "crossing fibers", the complexity of the fiber network as reconstructed with tractography has increased tremendously. Many pathways interdigitate and overlap, which hampers an unequivocal 3D visualization of the network and impedes an efficient study of its organization. We propose a novel fiber tractography visualization approach that interactively and selectively adapts the transparency rendering of fiber trajectories as a function of their orientation to enhance the visibility of the spatial context. More specifically, pathways that are oriented (locally or globally) along a user-specified opacity axis can be made more transparent or opaque. This substantially improves the 3D visualization of the fiber network and the exploration of tissue configurations that would otherwise be largely covered by other pathways. We present examples of fiber bundle extraction and neurosurgical planning cases where the added benefit of our new visualization scheme is demonstrated over conventional fiber visualization approaches.

  4. 3D Volume Rendering and 3D Printing (Additive Manufacturing).

    PubMed

    Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T

    2018-07-01

    Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. HVS: an image-based approach for constructing virtual environments

    NASA Astrophysics Data System (ADS)

    Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao

    1998-09-01

    Virtual reality systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, turning left/right, and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video by supporting walking forward/backward.

  6. Creating 3D visualizations of MRI data: A brief guide.

    PubMed

    Madan, Christopher R

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D 'glass brain' rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study's findings.

  7. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340

  8. Enhanced visualization of MR angiogram with modified MIP and 3D image fusion

    NASA Astrophysics Data System (ADS)

    Kim, JongHyo; Yeon, Kyoung M.; Han, Man Chung; Lee, Dong Hyuk; Cho, Han I.

    1997-05-01

    We have developed a 3D image processing and display technique that includes image resampling, modification of MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates the visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram that is overlapped with a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other structures of the brain. The two images are fused, after adjustment of the contrast and brightness levels of each image, in such a way that both the vasculature and brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resulting image visualizes both the brain structure and vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for surgical planning in neurosurgery.
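
    Assuming both images have been rendered from the same viewpoint and normalized to [0, 1], the fusion step reduces to a per-pixel combination. The sketch below shows the maximum-value rule and a two-color-table variant like those described; the tint values are illustrative.

      import numpy as np

      def fuse_mip_volume(mip_rgb, vr_rgb, mode="max"):
          """Fuse a MIP angiogram with a volume rendering, both (H, W, 3)."""
          if mode == "max":
              return np.maximum(mip_rgb, vr_rgb)             # brightest wins
          # Variant: give each image its own color table, then blend.
          vessels = mip_rgb * np.array([1.0, 0.2, 0.2])      # reddish vasculature
          brain = vr_rgb * np.array([0.7, 0.7, 0.9])         # bluish-grey anatomy
          return np.clip(vessels + brain, 0.0, 1.0)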

  9. Stretchable ultrasonic transducer arrays for three-dimensional imaging on complex surfaces

    PubMed Central

    Zhu, Xuan; Li, Xiaoshi; Chen, Zeyu; Chen, Yimu; Lei, Yusheng; Li, Yang; Nomoto, Akihiro; Zhou, Qifa; di Scalea, Francesco Lanza

    2018-01-01

    Ultrasonic imaging has been implemented as a powerful tool for noninvasive subsurface inspections of both structural and biological media. Current ultrasound probes are rigid and bulky and cannot readily image through nonplanar three-dimensional (3D) surfaces. However, imaging through these complicated surfaces is vital because stress concentrations at geometrical discontinuities render these surfaces highly prone to defects. This study reports a stretchable ultrasound probe that can conform to and detect nonplanar complex surfaces. The probe consists of a 10 × 10 array of piezoelectric transducers that exploit an “island-bridge” layout with multilayer electrodes, encapsulated by thin and compliant silicone elastomers. The stretchable probe shows excellent electromechanical coupling, minimal cross-talk, and more than 50% stretchability. Its performance is demonstrated by reconstructing defects in 3D space with high spatial resolution through flat, concave, and convex surfaces. The results hold great implications for applications of ultrasound that require imaging through complex surfaces. PMID:29740603

  10. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in the stealth mode or as a player, which includes battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high-quality, fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows95 and WindowsNT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare supports the Righteous 3D graphics board from Orchid Technologies, with an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.

  11. The rendering context for stereoscopic 3D web

    NASA Astrophysics Data System (ADS)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopy technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility was also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating the 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, each representing the left-eye view and right-eye view, both to be combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.

  12. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization. It is important to use dynamic visualization techniques to reveal variation processes and trends vividly and comprehensively for geographical phenomena. Dealing with the challenges posed by dynamic visualization of both 2D and 3D spatial dynamic targets, especially across different spatial data types, requires a high-performance GIS dynamic objects rendering engine. The main approach for improving a rendering engine handling vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented to solve this problem based on hybrid acceleration techniques. The engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantages of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly even with vast amounts of dynamic target data. A prototype of the engine was developed based on SuperMap GIS iObjects. Experiments designed for large-scale spatial data visualization showed that the engine achieves high performance: rendering two-dimensional and three-dimensional dynamic objects is 20 times faster on the GPU than on the CPU.

  13. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, the mobile device has become a tool to help clinicians view patient information and medical images anywhere and anytime. However, transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited in bandwidth. Moreover, constrained by computing capability, memory and battery endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on the mobile device, such as real-time, directly interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to allow mobile devices with different platforms to access post-processing of medical images, the protocol is described in the Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g., window leveling, pixel value readout) and 3D post-processing (e.g., maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol; it allows a mobile device to access post-processing of medical image services on the render server via a client application or a web page.
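
    Since the protocol is XML-based, a client request might be assembled as below. All element and attribute names here are hypothetical stand-ins; the abstract does not reproduce the actual schema.

      import xml.etree.ElementTree as ET

      def build_3d_request(token, series_uid, operation, params):
          """One hypothetical 3D post-processing request message."""
          root = ET.Element("RenderRequest", version="1.0")
          ET.SubElement(root, "Auth", token=token)                 # user authentication
          ET.SubElement(root, "Series", uid=series_uid)            # image query/retrieval
          op = ET.SubElement(root, "PostProcessing3D", operation=operation)
          for name, value in params.items():
              ET.SubElement(op, "Param", name=name, value=str(value))
          return ET.tostring(root, encoding="unicode")

      # e.g. ask the render server for a maximum intensity projection:
      msg = build_3d_request("abc123", "1.2.840.1.99", "MIP",
                             {"azimuth": 45, "elevation": 20, "width": 512})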

  14. A Cut-and-Paste Approach to 3D Graphene-Oxide-Based Architectures.

    PubMed

    Luo, Chong; Yeh, Che-Ning; Baltazar, Jesus M Lopez; Tsai, Chao-Lin; Huang, Jiaxing

    2018-04-01

    Properly cut sheets can be converted into complex 3D structures by three basic operations including folding, bending, and pasting to render new functions. Folding and bending are extensively employed in crumpling, origami, and pop-up fabrications for 3D structures. Pasting joins different parts of a material together, and can create new geometries that are fundamentally unattainable by folding and bending. However, it has been much less explored, likely due to limited choice of weldable thin film materials and residue-free glues. Here it is shown that graphene oxide (GO) paper is one such suitable material. Stacked GO sheets can be readily loosened up and even redispersed in water, which upon drying, restack to form solid structures. Therefore, water can be utilized to heal local damage, glue separated pieces, and release internal stress in bent GO papers to fix their shapes. Complex and dynamic 3D GO architectures can thus be fabricated by a cut-and-paste approach, which is also applicable to GO-based hybrid with carbon nanotubes or clay sheets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Integrity Determination for Image Rendering Vision Navigation

    DTIC Science & Technology

    2016-03-01

    …identifying an object within a scene, tracking a SIFT feature between frames, or matching images and/or features for stereo vision applications. This… object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community… matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or…

  16. 3D in the Fast Lane: Render as You Go with the Latest OpenGL Boards.

    ERIC Educational Resources Information Center

    Sauer, Jeff; Murphy, Sam

    1997-01-01

    NT OpenGL hardware allows modelers and animators to work at relatively inexpensive NT workstations in their own offices or homes, rather than in shared space and workstation time in expensive studios. Rates seven OpenGL boards and two QuickDraw 3D accelerator boards for Mac users on overall value, wireframe and texture rendering, 2D acceleration, and…

  17. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain- modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  18. LOD-Sprite Technique for Accelerated Terrain Rendering

    DTIC Science & Technology

    1999-01-01

    …includes limited parallax, is possible. Another category samples the full plenoptic function, resulting in 3D, 4D or even 5D image sprites [13, 10]… Plenoptic modeling: An image-based rendering system. Computer Graphics (Proc. SIGGRAPH '95), pages 39–46, 1995. [19] P. Rademacher and G. Bishop…

  19. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    PubMed

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter applied images, which cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
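
    Conceptually, a 3DPR object stores the parameters of the four task groups and replays them against the raw volume instead of storing rendered results. A schematic container is sketched below; the field names and the engine API are assumptions, not DICOM attributes.

      from dataclasses import dataclass, field

      @dataclass
      class PresentationState3D:
          """Parameters for the four tasks of 3D medical visualization."""
          preprocessing: dict = field(default_factory=dict)    # e.g. filtering
          segmentation: dict = field(default_factory=dict)     # e.g. compressed masks
          postprocessing: dict = field(default_factory=dict)   # e.g. smoothing
          rendering: dict = field(default_factory=dict)        # e.g. transfer function, camera

          def replay(self, volume, engine):
              """Regenerate the visualization by re-applying each task in order."""
              v = engine.preprocess(volume, **self.preprocessing)
              m = engine.segment(v, **self.segmentation)
              m = engine.postprocess(m, **self.postprocessing)
              return engine.render(v, m, **self.rendering)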

  20. Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high-resolution MR imaging.

    PubMed

    Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A

    1999-08-01

    Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.

  1. iview: an interactive WebGL visualizer for protein-ligand complex.

    PubMed

    Li, Hongjian; Leung, Kwong-Sak; Nakane, Takanori; Wong, Man-Hon

    2014-02-25

    Visualization of protein-ligand complexes plays an important role in elucidating protein-ligand interactions and aiding novel drug design. Most existing web visualizers either rely on slow software rendering or lack virtual reality support, and the vital feature of macromolecular surface construction is also unavailable. We have developed iview, an easy-to-use interactive WebGL visualizer of protein-ligand complexes. It exploits hardware acceleration rather than software rendering. It features three special effects in virtual reality settings, namely anaglyph, parallax barrier, and Oculus Rift, resulting in visually appealing identification of intermolecular interactions. It supports four surface representations: Van der Waals surface, solvent-excluded surface, solvent-accessible surface, and molecular surface. Moreover, based on the feature-rich version of iview, we have also developed a neat, tailor-made version specifically for our istar web platform for protein-ligand docking purposes. This demonstrates the excellent portability of iview. Using innovative 3D techniques, we provide a user-friendly visualizer that is not intended to compete with professional visualizers, but to enable easy accessibility and platform independence.

  2. Feasibility of Clinician-Facilitated Three-Dimensional Printing of Synthetic Cranioplasty Flaps.

    PubMed

    Panesar, Sandip S; Belo, Joao Tiago A; D'Souza, Rhett N

    2018-05-01

    Integration of three-dimensional (3D) printing and stereolithography into clinical practice is in its nascence, and the concepts may be esoteric to the practicing neurosurgeon. Currently, the creation of 3D-printed implants involves the recruitment of offsite third parties. We explored a range of 3D scanning and stereolithographic techniques to create patient-specific synthetic implants using an onsite, clinician-facilitated approach. We simulated bilateral craniectomies in a single cadaveric specimen. We devised 3 methods of creating stereolithographically viable virtual models from the removed bone. First, we used preoperative and postoperative computed tomography scanner-derived bony window models from which the flap was extracted. Second, we used an entry-level 3D light scanner to scan and render models of the individual bone pieces. Third, we used an arm-mounted 3D laser scanner to create virtual models using a real-time approach. Flaps were printed, in an ultraviolet-cured polymer, from the computed tomography scanner and laser scanner models only. The light scanner did not produce virtual models suitable for printing. The computed tomography scanner-derived models required extensive post-fabrication modification to fit the existing defects. The laser scanner models fit well within the defects without any modification. The methods presented varying levels of complexity in acquisition and model rendering. Each technique required hardware at price points ranging from $0 to approximately $100,000. The laser scanner models produced the best quality parts, which had a near-perfect fit with the original defects. Potential neurosurgical applications of this technology are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance

    NASA Astrophysics Data System (ADS)

    Ellul, C.; Altenbuchner, J.

    2013-09-01

    The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.
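
    Since the LoD 1 buildings in such datasets are produced by extrusion of 2D footprints, the geometry generation itself is simple; a minimal sketch (the footprint and height below are made-up inputs):

        import numpy as np

        def extrude_lod1(footprint, height):
            """Turn a 2D footprint polygon (N x 2, counter-clockwise) into a flat-roofed prism."""
            n = len(footprint)
            base = np.hstack([footprint, np.zeros((n, 1))])
            top = np.hstack([footprint, np.full((n, 1), height)])
            verts = np.vstack([base, top])
            # one quad (two triangles) per wall edge; roof triangulated as a fan for brevity
            walls = []
            for i in range(n):
                j = (i + 1) % n
                walls += [(i, j, n + j), (i, n + j, n + i)]
            roof = [(n, n + i, n + i + 1) for i in range(1, n - 1)]
            return verts, np.array(walls + roof)

        verts, tris = extrude_lod1(np.array([[0, 0], [10, 0], [10, 6], [0, 6]]), height=12.0)
        print(len(verts), "vertices,", len(tris), "triangles")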

  4. Autostereoscopic image creation by hyperview matrix controlled single pixel rendering

    NASA Astrophysics Data System (ADS)

    Grasnick, Armin

    2017-06-01

    Just as awareness of stereoscopic cinema has increased, so has the perception of its limitations when watching movies with 3D glasses. It is not only that the additional glasses are uncomfortable and annoying; there are tangible arguments for avoiding 3D glasses. These "stereoscopic deficits" are caused by the 3D glasses themselves. In contrast to natural viewing with the naked eye, artificial 3D viewing with 3D glasses introduces specific "unnatural" side effects. Most moviegoers have experienced unspecific discomfort in 3D cinema, which they may have attributed to insufficient image quality. Obviously, quality problems with 3D glasses can be solved by technical improvement. But this simple answer can mislead, and already has misled, some decision makers into settling for the existing 3D-glasses solutions. It needs to be underlined that there are inherent difficulties with the glasses which can never be solved by modest advancement, as the 3D glasses themselves cause them. To overcome the limitations of stereoscopy in display applications, several technologies have been proposed to create a 3D impression without the need for 3D glasses, known as autostereoscopy. But even today's autostereoscopic displays cannot solve all viewing problems and still show limitations. A hyperview display could be a suitable candidate, if it were possible to create an affordable device and generate the necessary content in an acceptable time frame. All autostereoscopic displays based on the ideas of the light field, integral photography, or super-multiview can be unified within the concept of hyperview. It is essential to their functionality that each of these display technologies uses numerous different perspective images to create the 3D impression. Calculating such a very high number of views requires much more computing time than forming a simple stereoscopic image pair. The hyperview concept allows the screen image of any 3D technology to be described by a simple equation. This formula can be utilized to create a specific hyperview matrix for a certain 3D display, independent of the technology used. A hyperview matrix may contain references to a large number of images and acts as an instruction for a subsequent rendering process of particular pixels. Naturally, a single pixel delivers an image with no resolution and does not provide any idea of the rendered scene. However, by implementing the method of pixel recycling, a 3D image can be perceived even if all source images are different. It will be proven that several million perspectives can be rendered with the support of GPU rendering, benefiting from the hyperview matrix. As a result, a conventional autostereoscopic display designed to represent only a few perspectives can be used to show a hyperview image by means of a suitable hyperview matrix. It will be shown that a hyperview image with millions of views can be presented on a conventional autostereoscopic display. Such a hyperview image requires that all pixels of the display be allocated from different source images. Controlled by the hyperview matrix, an adapted renderer can render a full hyperview image in real time.
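
    A rough sketch of the core idea: a per-pixel table decides which source view feeds each output subpixel (the mapping below is a generic slanted-lenticular formula, invented for illustration, not the paper's actual hyperview equation):

        import numpy as np

        H, W, n_views = 600, 800, 8
        views = np.random.rand(n_views, H, W, 3)  # placeholder perspective renders

        # hyperview-matrix analogue: for every output subpixel, which source view feeds it
        y, x = np.mgrid[0:H, 0:W]
        out = np.empty((H, W, 3))
        for c in range(3):  # R, G, B subpixels sit at different horizontal offsets
            view_id = (x * 3 + c + y) % n_views   # assumed slanted mapping
            out[..., c] = views[view_id, y, x, c]
        print(out.shape)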

  5. Perception-based 3D tactile rendering from a single image for human skin examinations by dynamic touch.

    PubMed

    Kim, K; Lee, S

    2015-05-01

    Diagnosis of skin conditions depends on the assessment of skin surface properties that are better represented by tactile properties, such as stiffness, roughness, and friction, than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real time. The conversion from single 2D images to 3D surfaces was implemented using human perception data collected in a psychophysical experiment that measured human visual and haptic sensitivity to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties reported in prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces by using a haptic device (Falcon) only; no visual cue was provided for the experiment. The results indicate that our system renders discernibly different tactile surfaces for different skin images. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real time for the purposes of skin diagnosis, simulation, or training. Our system can also be used for other applications such as virtual reality and cosmetics. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
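
    The first stage, image to haptic surface, can be caricatured as intensity-to-height mapping with a perceptual transfer curve; a minimal sketch (the scaling exponent and amplitude are invented for illustration, not the paper's measured values):

        import numpy as np

        def image_to_height(gray, amplitude_mm=0.5, gamma=0.7):
            """Map normalized image intensity to surface height, gamma-warped
            to mimic a perceptually derived (here: assumed) transfer curve."""
            g = (gray - gray.min()) / (np.ptp(gray) + 1e-9)
            h = amplitude_mm * g**gamma
            # surface normals from height gradients, used later for force rendering
            gy, gx = np.gradient(h)
            return h, np.stack([-gx, -gy, np.ones_like(h)], axis=-1)

        h, normals = image_to_height(np.random.rand(128, 128))
        print(h.shape, normals.shape)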

  6. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprints, volumetric scientific data (using volume rendering, isosurfaces, and slice planes), raster data such as still satellite images, vector data, and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement the volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform that provides data analysis, hybrid visualization, and complex interactions at the same time. The software is available on demand for free at france@exelisvis.com.

  7. Real-time volume rendering of digital medical images on an iOS device

    NASA Astrophysics Data System (ADS)

    Noon, Christian; Holub, Joseph; Winer, Eliot

    2013-03-01

    Performing high-quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still quite a difficult task. This is especially true for 3D volume rendering of digital medical images. Enabling this would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real-time volume rendering of digital medical images on iOS devices using custom-developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features, including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs), such as OpenSceneGraph (OSG) as the primary graphics renderer coupled with iOS Cocoa Touch for user interaction, and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device, so no Internet connection is required.
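
    Orthogonal texture slicing, the technique used here, picks the volume axis most aligned with the view direction and composites the stack of slices back to front; a schematic CPU version (assuming NumPy; real implementations do this in GPU shaders):

        import numpy as np

        def slice_composite(vol, alpha, view_dir):
            """vol, alpha: 3D arrays; view_dir: 3-vector. Back-to-front alpha blending
            along the volume axis that best matches the view direction."""
            axis = int(np.argmax(np.abs(view_dir)))
            order = range(vol.shape[axis])
            if view_dir[axis] > 0:
                order = reversed(order)            # iterate back-to-front for this viewer
            out = np.zeros(np.delete(vol.shape, axis))
            acc_a = np.zeros_like(out)
            for i in order:
                c = np.take(vol, i, axis=axis)
                a = np.take(alpha, i, axis=axis)
                out = c * a + out * (1 - a)        # "over" compositing
                acc_a = a + acc_a * (1 - a)
            return out, acc_a

        rgb, a = slice_composite(np.random.rand(32, 32, 32),
                                 np.full((32, 32, 32), 0.05),
                                 np.array([0.2, 0.9, 0.1]))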

  8. Evaluation of haptic interfaces for simulation of drill vibration in virtual temporal bone surgery.

    PubMed

    Ghasemloonia, Ahmad; Baxandall, Shalese; Zareinia, Kourosh; Lui, Justin T; Dort, Joseph C; Sutherland, Garnette R; Chan, Sonny

    2016-11-01

    Surgical training is evolving from an observership model towards a new paradigm that includes virtual-reality (VR) simulation. In otolaryngology, temporal bone dissection has become intimately linked with VR simulation, as the complexity of the anatomy demands a high level of surgeon aptitude and confidence. While an adequate 3D visualization of the surgical site is available in current simulators, the force feedback rendered during haptic interaction does not convey vibrations. This lack of vibration rendering limits the simulation fidelity of a surgical drill such as that used in temporal bone dissection. In order to develop an immersive simulation platform capable of haptic force and vibration feedback, the efficacy of hand controllers for rendering vibration in different drilling circumstances needs to be investigated. In this study, the vibration rendering abilities of four different haptic hand controllers were analyzed and compared to identify the best commercial haptic hand controller. A test rig was developed to record vibrations encountered during temporal bone dissection, and software was written to render the recorded signals without adding hardware to the system. An accelerometer mounted on the end-effector of each device recorded the rendered vibration signals. The newly recorded vibration signal was compared with the input signal in both the time and frequency domains by coherence and cross-correlation analyses to quantitatively measure the fidelity of these devices in rendering vibrotactile drilling feedback under different drilling conditions. This method can be used to assess vibration rendering ability in VR simulation systems and to select suitable haptic devices. Copyright © 2016 Elsevier Ltd. All rights reserved.
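
    The fidelity comparison described, input vibration versus what the device end-effector actually reproduced, maps directly onto standard signal tools; a minimal sketch with synthetic signals (SciPy assumed; the sampling rate and waveforms are fabricated):

        import numpy as np
        from scipy.signal import coherence, correlate

        fs = 2000.0                       # Hz, accelerometer sampling rate (assumed)
        t = np.arange(0, 2.0, 1 / fs)
        drill = np.sin(2 * np.pi * 180 * t) + 0.3 * np.random.randn(t.size)   # recorded input
        rendered = 0.8 * np.roll(drill, 15) + 0.2 * np.random.randn(t.size)   # device output

        f, Cxy = coherence(drill, rendered, fs=fs, nperseg=512)   # frequency-domain fidelity
        lag = correlate(rendered, drill, mode="full").argmax() - (t.size - 1)  # time-domain delay
        print(f"peak coherence {Cxy.max():.2f}, lag {lag} samples")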

  9. Experimenter's Laboratory for Visualized Interactive Science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Rodier, Daniel R.; Klemp, Marjorie K.

    1994-01-01

    ELVIS (Experimenter's Laboratory for Visualized Interactive Science) is an interactive visualization environment that enables scientists, students, and educators to visualize and analyze large, complex, and diverse sets of scientific data. It accomplishes this by presenting the data sets as 2-D, 3-D, color, stereo, and graphic images with movable and multiple light sources combined with displays of solid-surface, contours, wire-frame, and transparency. By simultaneously rendering diverse data sets acquired from multiple sources, formats, and resolutions and by interacting with the data through an intuitive, direct-manipulation interface, ELVIS provides an interactive and responsive environment for exploratory data analysis.

  10. Technical Note: A 3-D rendering algorithm for electromechanical wave imaging of a beating heart.

    PubMed

    Nauleau, Pierre; Melki, Lea; Wan, Elaine; Konofagou, Elisa

    2017-09-01

    Arrhythmias can be treated by ablating the heart tissue in the regions of abnormal contraction. The current clinical standard provides electroanatomic 3-D maps to visualize the electrical activation and locate the arrhythmogenic sources. However, the procedure is time-consuming and invasive. Electromechanical wave imaging is an ultrasound-based noninvasive technique that can provide 2-D maps of the electromechanical activation of the heart. In order to fully visualize the complex 3-D pattern of activation, several 2-D views are acquired and processed separately. They are then manually registered with a 3-D rendering software to generate a pseudo-3-D map. However, this last step is operator-dependent and time-consuming. This paper presents a method to generate a full 3-D map of the electromechanical activation using multiple 2-D images. Two canine models were considered to illustrate the method: one in normal sinus rhythm and one paced from the lateral region of the heart. Four standard echographic views of each canine heart were acquired. Electromechanical wave imaging was applied to generate four 2-D activation maps of the left ventricle. The radial positions and activation timings of the walls were automatically extracted from those maps. In each slice, from apex to base, these values were interpolated around the circumference to generate a full 3-D map. In both cases, a 3-D activation map and a cine-loop of the propagation of the electromechanical wave were automatically generated. The 3-D map showing the electromechanical activation timings overlaid on realistic anatomy assists with the visualization of the sources of earlier activation (which are potential arrhythmogenic sources). The earliest sources of activation corresponded to the expected ones: septum for the normal rhythm and lateral for the pacing case. The proposed technique provides, automatically, a 3-D electromechanical activation map with a realistic anatomy. This represents a step towards a noninvasive tool to efficiently localize arrhythmias in 3-D. © 2017 American Association of Physicists in Medicine.
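
    The reconstruction step, interpolating sparse angular samples of activation time around each short-axis slice, is essentially periodic 1-D interpolation; a toy version (view angles and timings are fabricated):

        import numpy as np

        # activation timings (ms) observed in four echo views around one slice
        view_angles = np.array([0.0, 90.0, 180.0, 270.0])    # degrees around the left ventricle
        timings = np.array([42.0, 55.0, 78.0, 60.0])

        circumference = np.arange(0, 360, 2.0)
        # periodic linear interpolation closes the loop between 270 deg and 0 deg
        full_map = np.interp(circumference, view_angles, timings, period=360.0)
        print(full_map.shape, full_map[:5])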

  11. Long-term thermophilic mono-digestion of rendering wastes and co-digestion with potato pulp.

    PubMed

    Bayr, S; Ojanperä, M; Kaparaju, P; Rintala, J

    2014-10-01

    In this study, mono-digestion of rendering wastes and co-digestion of rendering wastes with potato pulp were studied for the first time in continuous stirred tank reactor (CSTR) experiments at 55°C. Rendering wastes have high protein and lipid contents and are considered good substrates for methane production. However, accumulation of intermediate digestion products, viz. volatile fatty acids (VFAs), long-chain fatty acids (LCFAs) and ammonia nitrogen (NH4-N and/or free NH3), can cause process imbalance during digestion. Mono-digestion of rendering wastes at an organic loading rate (OLR) of 1.5 kg volatile solids (VS)/m³·d and a hydraulic retention time (HRT) of 50 d was unstable and resulted in methane yields of 450 dm³/kg VS fed. On the other hand, co-digestion of rendering wastes with potato pulp (60% wet weight, WW) at the same OLR and HRT improved process stability and increased methane yields (500-680 dm³/kg VS fed). Thus, it can be concluded that co-digestion of rendering wastes with potato pulp could improve process stability and methane yields from these difficult-to-treat industrial waste materials. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Vertex shading of the three-dimensional model based on ray-tracing algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Ray tracing is one of the research hotspots in photorealistic graphics, and an important light-and-shadow technology in many industries that work with three-dimensional (3D) structure, such as aerospace, games, and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented that colors and renders the vertices of the 3D model directly. Rendering results depend on the degree of subdivision of the 3D model. A good light-and-shade effect is achieved with a quad-tree data structure that adaptively subdivides a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve rendering efficiency. Moreover, the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is fine enough, the same effects as per-pixel shading will be obtained. In practice, the method allows a compromise between efficiency and visual quality.
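
    The core refinement rule can be sketched compactly: split a triangle whenever its vertex brightnesses differ too much (the shade() stand-in below replaces the actual per-vertex ray trace; the threshold is arbitrary):

        import numpy as np

        def shade(v):
            """Stand-in for tracing a ray from vertex v; returns a brightness."""
            return 0.5 + 0.5 * np.sin(v[0] * 3) * np.cos(v[1] * 3)

        def subdivide(tri, threshold=0.1, depth=0, max_depth=6):
            """Recursively split a triangle while its vertex brightness contrast is high."""
            b = [shade(v) for v in tri]
            if depth >= max_depth or max(b) - min(b) < threshold:
                return [tri]
            m01, m12, m20 = (tri[0] + tri[1]) / 2, (tri[1] + tri[2]) / 2, (tri[2] + tri[0]) / 2
            out = []
            for child in ([tri[0], m01, m20], [m01, tri[1], m12],
                          [m20, m12, tri[2]], [m01, m12, m20]):
                out += subdivide(np.array(child), threshold, depth + 1, max_depth)
            return out

        tris = subdivide(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))
        print(len(tris), "triangles after adaptive subdivision")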

  13. Real-time 3D image reconstruction guidance in liver resection surgery.

    PubMed

    Soler, Luc; Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-04-01

    Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that augments the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented-reality view. From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of the 3D models and their clear benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been performed, illustrating the potential clinical benefit of such assistance for safety, but also the current limits that automatic augmented reality will have to overcome. Virtual patient modeling should become mandatory for certain interventions, which remain to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but currently remains limited due to the complexity of organ deformation during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of hybrid operating rooms.

  14. PRISM: An open source framework for the interactive design of GPU volume rendering shaders.

    PubMed

    Drouin, Simon; Collins, D Louis

    2018-01-01

    Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, and provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution in which critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering-effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely adopted direct volume rendering implementation in VTK, at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging experts who have little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate the development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.
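
    The framework's central idea, a ray-caster whose per-sample integration step is swappable, can be mimicked in miniature (this is an illustrative CPU analogue, not PRISM's GPU code):

        import numpy as np

        def raycast(vol, integrate, n_steps=64):
            """March a ray through the volume for every pixel of a tiny image plane,
            delegating the accumulation rule to `integrate` (the replaceable block)."""
            h = w = 32
            img = np.zeros((h, w))
            for y in range(h):
                for x in range(w):
                    acc = 0.0
                    for k in range(n_steps):          # samples along the ray (axis 0 here)
                        zi = int(k * vol.shape[0] / n_steps)
                        yi = int(y * vol.shape[1] / h)
                        xi = int(x * vol.shape[2] / w)
                        acc = integrate(acc, vol[zi, yi, xi])
                    img[y, x] = acc
            return img

        vol = np.random.rand(32, 32, 32)
        mip = raycast(vol, lambda acc, s: max(acc, s))          # MIP-style effect
        xray = raycast(vol, lambda acc, s: acc + s / 64.0)      # additive, X-ray-like effect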

  16. STRING 3: An Advanced Groundwater Flow Visualization Tool

    NASA Astrophysics Data System (ADS)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] focused solely on the intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for the visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool, and its challenges, in continuation of Michel et al. [2]. This elevates STRING from a post-production tool to an exploration tool for experts. In STRING, moving pathlets provide an intuition of the velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow, an advanced method for intelligent, time-dependent seeding is used, building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D brings many new challenges. With the implementation of a seeding strategy for 3D, one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field, other means are required for the visualization of additional flow properties. We suggest the use of direct volume rendering and isosurfaces for scalar features. In this regard we developed an efficient approach for combining raytraced volume rendering with regular OpenGL geometries, achieved through the use of depth peeling or A-buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain; hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself; for this, the silhouette based on the angle between neighboring faces is extracted. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool, the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked; these provide a trade-off between the usage of geometry and fragment shaders. We show that point-sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point-sprite-based approach poses many non-trivial problems in joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179. [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670. [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D visualization of groundwater flow. In Proceedings of IAMG 2015, Freiberg, pp. 813-822.
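
    Mixing a ray-traced volume with rasterized geometry, as described above, typically amounts to terminating each volume ray at the depth of the nearest opaque primitive; a schematic 1-D sketch of that idea (all data made up, not STRING's implementation):

        import numpy as np

        def composite_ray(samples, alphas, geom_depth, geom_color):
            """Front-to-back volume integration that stops at the opaque-geometry depth."""
            color, alpha = 0.0, 0.0
            for k, (s, a) in enumerate(zip(samples, alphas)):
                if k >= geom_depth:                    # ray hits the rasterized geometry
                    color += (1 - alpha) * geom_color
                    alpha = 1.0
                    break
                color += (1 - alpha) * a * s
                alpha += (1 - alpha) * a
            return color, alpha

        c, a = composite_ray(np.random.rand(64), np.full(64, 0.03), geom_depth=40, geom_color=0.8)
        print(round(c, 3), round(a, 3))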

  17. Sci-Vis Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur Bleeker, PNNL

    2015-03-11

    SVF is a full-featured OpenGL 3D framework that allows for the rapid creation of complex visualizations. The SVF framework handles much of the lifecycle and the complex tasks required for a 3D visualization. Unlike a game framework, SVF was designed to use fewer resources, work well in a windowed environment, and only render when necessary. The scene also takes advantage of multiple threads to free up the UI thread as much as possible. Shapes (actors) in the scene are created by adding or removing functionality (through support objects) at runtime. This allows a highly flexible and dynamic means of creating highly complex actors without the code complexity (it also helps overcome the lack of multiple inheritance in Java). All classes are highly customizable, and there are abstract classes intended to be subclassed so a developer can create more complex and more performant actors. Multiple demos are included in the framework to help the developer get started, and they show off nearly all of the functionality. Some simple shapes (actors) are provided, such as text, bordered text, radial text, text areas, complex paths, NURBS paths, cubes, disks, grids, planes, geometric shapes, and volumetric areas. It also comes with various camera types that can be dragged, zoomed, and rotated. Picking or selecting items in the scene can be accomplished in various ways depending on your needs (raycasting or color picking). The framework currently has functionality for tooltips, animation, actor pools, color gradients, 2D physics, text, 1D/2D/3D textures, children, blending, clipping planes, view-frustum culling, custom shaders, and custom actor states.
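
    Color picking, one of the two selection schemes mentioned, encodes each actor's index as a unique color in an off-screen pass and reads back the pixel under the cursor; a language-agnostic sketch (SVF itself is Java, so this Python version only illustrates the concept):

        import numpy as np

        def id_to_color(i):                    # pack a 24-bit actor id into RGB
            return ((i >> 16) & 255, (i >> 8) & 255, i & 255)

        def color_to_id(rgb):
            r, g, b = (int(c) for c in rgb)
            return (r << 16) | (g << 8) | b

        # off-screen "render": each actor fills its pixels with its id-color
        pick_buffer = np.zeros((240, 320, 3), dtype=np.uint8)
        pick_buffer[50:100, 60:120] = id_to_color(7)      # actor 7's footprint
        pick_buffer[80:200, 100:300] = id_to_color(42)    # actor 42 drawn on top

        clicked = tuple(pick_buffer[90, 110])             # pixel under the cursor
        print("picked actor", color_to_id(clicked))       # -> 42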

  18. A 3D ultrasound scanner: real time filtering and rendering algorithms.

    PubMed

    Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M

    1997-01-01

    The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical, and user-friendly way to help the physician during diagnosis. A comparison is presented among several algorithms for digital filtering, data segmentation, and rendering for real-time, PC-based, three-dimensional reconstruction from B-mode ultrasonic biomedical images. Several algorithms for digital filtering have been compared with regard to processing time and final image quality. Three-dimensional data segmentation and rendering techniques have been examined with special reference to user-friendly features for foreseeable applications and to reconstruction speed.

  19. Compression and accelerated rendering of volume data using DWT

    NASA Astrophysics Data System (ADS)

    Kamath, Preyas; Akleman, Ergun; Chan, Andrew K.

    1998-09-01

    2D images cannot convey information on object depth and location relative to surfaces. The medical community is increasingly using 3D visualization techniques to view data from CT scans, MRI, etc. 3D images provide more information on depth and location in the spatial domain, helping surgeons make better diagnoses, and can be constructed from 2D images using 3D scalar algorithms. With recent advances in communication techniques, it is possible for doctors to diagnose and plan the treatment of a patient who lives at a remote location, by transmitting the relevant patient data via telephone lines. If this information is to be reconstructed in 3D, then the 2D images must be transmitted; however, 2D dataset storage occupies a lot of memory, and visualization algorithms are slow. We describe in this paper a scheme that reduces data transfer time by transmitting only the information the doctor wants. Compression is achieved by reducing the amount of data transferred, made possible by applying the 3D wavelet transform to 3D datasets. Since the wavelet transform is localized in both the frequency and spatial domains, we transmit detail only in the region where the doctor needs it. Since only the ROI (Region Of Interest) is reconstructed in detail, we need to render only the ROI in detail, and can thus also reduce the rendering time.
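
    The ROI idea maps naturally onto modern wavelet libraries; a minimal sketch with PyWavelets (the ROI bounds are arbitrary, and reconstruction may be padded by a voxel along odd-sized axes):

        import numpy as np
        import pywt

        vol = np.random.rand(64, 64, 64).astype(np.float32)   # stand-in for the CT/MRI stack
        coeffs = pywt.wavedecn(vol, "db2", level=2)            # 3D DWT

        # keep full detail only inside a central ROI; zero detail coefficients elsewhere
        lo, hi = 0.25, 0.75                                    # ROI occupies the central half
        for level_details in coeffs[1:]:
            for key, band in level_details.items():
                mask = np.zeros_like(band)
                sl = tuple(slice(int(lo * n), int(hi * n)) for n in band.shape)
                mask[sl] = 1.0
                level_details[key] = band * mask               # sparse -> cheap to transmit

        recon = pywt.waverecn(coeffs, "db2")                   # detailed only inside the ROI
        print(recon.shape)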

  20. Irreparable complex DNA double-strand breaks induce chromosome breakage in organotypic three-dimensional human lung epithelial cell culture

    PubMed Central

    Asaithamby, Aroumougame; Hu, Burong; Delgado, Oliver; Ding, Liang-Hao; Story, Michael D.; Minna, John D.; Shay, Jerry W.; Chen, David J.

    2011-01-01

    DNA damage and consequent mutations initiate the multistep carcinogenic process. Differentiated cells have a reduced capacity to repair DNA lesions, but the biological impact of unrepaired DNA lesions in differentiated lung epithelial cells is unclear. Here, we used a novel organotypic human lung three-dimensional (3D) model to investigate the biological significance of unrepaired DNA lesions in differentiated lung epithelial cells. We showed, consistent with existing notions, that the kinetics of loss of simple double-strand breaks (DSBs) were significantly reduced in organotypic 3D culture compared to the kinetics of repair in two-dimensional (2D) culture. Strikingly, we found that, unlike simple DSBs, a majority of complex DNA lesions were irreparable in organotypic 3D culture. Levels of expression of multiple DNA damage repair pathway genes were significantly reduced in the organotypic 3D culture compared with those in 2D culture, providing molecular evidence for the defective DNA damage repair in organotypic culture. Further, when differentiated cells with unrepaired DNA lesions re-entered the cell cycle, they manifested a spectrum of gross chromosomal aberrations in mitosis. Our data suggest that downregulation of multiple DNA repair pathway genes in differentiated cells renders them vulnerable to DSBs, promoting genome instability that may lead to carcinogenesis. PMID:21421565

  1. Application of Virtual and Augmented reality to geoscientific teaching and research.

    NASA Astrophysics Data System (ADS)

    Hodgetts, David

    2017-04-01

    The geological sciences are an ideal candidate for the application of Virtual Reality (VR) and Augmented Reality (AR). Digital data collection techniques such as laser scanning, digital photogrammetry, and the increasing use of Unmanned Aerial Vehicle (UAV) or Small Unmanned Aircraft (SUA) technology allow us to collect large datasets efficiently and ever more affordably. This, linked with the recent resurgence in VR and AR technologies, makes these 3D digital datasets even more valuable. These advances in VR and AR have been further supported by rapid improvements in graphics card technology and by the development of high-performance software applications to support them. Visualizing data in VR is more complex than normal 3D rendering; consideration needs to be given to latency, frame rate, and the comfort of the viewer to enable reasonably long immersion times. Each frame has to be rendered from two viewpoints (one for each eye), requiring twice the rendering work of normal monoscopic views. Any unnatural effects (e.g., incorrect lighting) can lead to an uncomfortable VR experience, so these have to be minimized. With large digital outcrop datasets comprising tens to hundreds of millions of triangles, this is challenging but achievable. Apart from the obvious "wow factor" of VR, there are serious applications. It is often the case that users of digital outcrop data do not appreciate the size of the features they are dealing with. This is not the case when using correctly scaled VR, where a true sense of scale can be achieved. In addition, VR provides an excellent way of performing quality control on 3D models and interpretations, as errors are much more easily visible. VR models can then be used to create content for AR applications, closing the loop and taking interpretations back into the field.
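
    The two-viewpoint requirement boils down to offsetting the camera by half the interpupillary distance along its right vector before building each eye's view matrix; a minimal sketch (the 64 mm IPD is a typical assumption, not a value from the abstract):

        import numpy as np

        def look_at(eye, target, up):
            """Standard right-handed view matrix."""
            f = target - eye; f /= np.linalg.norm(f)
            r = np.cross(f, up); r /= np.linalg.norm(r)
            u = np.cross(r, f)
            m = np.eye(4)
            m[0, :3], m[1, :3], m[2, :3] = r, u, -f
            m[:3, 3] = -m[:3, :3] @ eye
            return m

        def stereo_views(eye, target, up, ipd=0.064):   # 64 mm, a common average
            f = target - eye; f /= np.linalg.norm(f)
            right = np.cross(f, up); right /= np.linalg.norm(right)
            offset = right * (ipd / 2)
            # shifting eye and target together keeps the two gaze directions parallel
            return (look_at(eye - offset, target - offset, up),
                    look_at(eye + offset, target + offset, up))

        left_view, right_view = stereo_views(np.array([0., 1.7, 5.]), np.zeros(3), np.array([0., 1., 0.]))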

  2. Latency in Distributed Acquisition and Rendering for Telepresence Systems.

    PubMed

    Ohl, Stephan; Willert, Malte; Staadt, Oliver

    2015-12-01

    Telepresence systems use 3D techniques to create a more natural human-centered communication over long distances. This work concentrates on the analysis of latency in telepresence systems where acquisition and rendering are distributed. Keeping latency low is important to immerse users in the virtual environment. To better understand latency problems and to identify their sources, we focus on the decomposition of system latency into sub-latencies. We contribute a latency model and show how it can be used to estimate latencies in a complex telepresence dataflow network. To compare the estimates with real latencies in our prototype, we modify two common latency measurement methods. The presented methodology enables the developer to optimize the design, find implementation issues, and gain deeper knowledge about specific sources of latency.
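
    A latency model of this kind can be as simple as summing per-stage delays along the slowest path of the dataflow graph; an illustrative sketch (the stage names and millisecond figures are invented, not the paper's measurements):

        # end-to-end latency as the critical path through a dataflow DAG
        stages = {                      # per-stage latency in ms (assumed figures)
            "capture": 16.7, "depth_fusion": 12.0, "encode": 8.0,
            "network": 25.0, "decode": 6.0, "render": 11.1,
        }
        edges = {                       # which stage feeds which
            "capture": ["depth_fusion"], "depth_fusion": ["encode"],
            "encode": ["network"], "network": ["decode"], "decode": ["render"],
            "render": [],
        }

        def worst_latency(node, memo={}):
            """Latency from `node` to the display, taking the slowest downstream branch."""
            if node not in memo:
                memo[node] = stages[node] + max((worst_latency(n) for n in edges[node]), default=0.0)
            return memo[node]

        print(f"estimated end-to-end latency: {worst_latency('capture'):.1f} ms")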

  3. "Black Bone" MRI: a novel imaging technique for 3D printing.

    PubMed

    Eley, Karen A; Watt-Smith, Stephen R; Golding, Stephen J

    2017-03-01

    Three-dimensionally printed anatomical models are rapidly becoming an integral part of the pre-operative planning of complex surgical cases. We have previously reported the "Black Bone" MRI technique as a non-ionizing alternative to CT. Segmentation of bone becomes possible by minimizing soft tissue contrast to enhance the bone-soft tissue boundary. The objectives of this study were to ascertain the potential of utilizing this technique to produce three-dimensional (3D) printed models. "Black Bone" MRI data acquired from adult volunteers and infants with craniosynostosis were 3D rendered and 3D printed. A custom phantom provided a surrogate marker of accuracy, permitting comparison between direct measurements and 3D printed models created by segmenting both CT and "Black Bone" MRI data sets using two different software packages. "Black Bone" MRI was successfully utilized to produce 3D models of the craniofacial skeleton in both adults and an infant. Measurements of the cube phantom and 3D printed models demonstrated submillimetre discrepancy. In this novel preliminary study exploring the potential of 3D printing from "Black Bone" MRI data, the feasibility of producing anatomical 3D models has been demonstrated, thus offering a potential non-ionizing alternative to CT for the craniofacial skeleton.

  4. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Owing to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in near real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, to accelerate rendering speed, a thin shell is defined from the detected contours to separate the observed organ from unrelated structures. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

  5. A Study on AR 3D Objects Shading Method Using Electronic Compass Sensor

    NASA Astrophysics Data System (ADS)

    Jung, Sungmo; Kim, Seoksoo

    More effective communication can be offered to users by applying NPR (Non-Photorealistic Rendering) methods to 3D graphics. Thus, there has been much research on how to apply NPR to mobile content. However, previous studies only propose cartoon rendering as a pre-processing step, with no consideration for the direction of light in the surrounding environment. In this study, therefore, an Electronic Compass Sensor (ECS) is applied to the shading of AR 3D objects in order to define the direction of light as a function of time of day, so that the objects blend in with the surrounding environment.

  6. Hybrid 3D visualization of the chest and virtual endoscopy of the tracheobronchial system: possibilities and limitations of clinical application.

    PubMed

    Seemann, M D; Claussen, C D

    2001-06-01

    A hybrid rendering method is described that combines a color-coded surface rendering method with a volume rendering method, enabling virtual endoscopic examinations using different representation models. 14 patients with malignancies of the lung and mediastinum (n=11) or lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and the anatomical and pathological features of the chest were segmented using an interactive threshold-interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model, and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system, and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both the simultaneous visualization of an airway, an airway lesion, and mediastinal structures, and a quantitative assessment of the spatial relationships between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated, or refused. Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and guiding bronchoscopic biopsy, surgical intervention, and palliative therapy, and it is likely to be increasingly accepted as a screening method for people with suspected endobronchial malignancy and as a follow-up examination in the aftercare of patients with malignant diseases.

  7. The Visible Human Project: From Body to Bits.

    PubMed

    Ackerman, Michael J

    2017-01-01

    Atlases of anatomy have long been a mainstay for visualizing and identifying features of the human body [1]. Many are constructed of idealized illustrations rendered so that structures are presented as three-dimensional (3-D) pictures. Others have employed photographs of actual dissections. Still others are composed of collections of artist renderings of organs or areas of interest. All rely on a basically two-dimensional (2-D) graphic display to depict and allow for a better understanding of a complicated 3-D structure.

  8. Integrated VR platform for 3D and image-based models: a step toward interactive image-based virtual environments

    NASA Astrophysics Data System (ADS)

    Yoon, Jayoung; Kim, Gerard J.

    2003-04-01

    Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for the various types of image-based techniques, establishing criteria for LOD/representation selection, handling the transitions between them, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance where the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Before rendering, objects are conservatively culled from the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
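
    The selection logic described reduces to a small decision function; a sketch (the thresholds are invented, and `interacting` encodes the rule that interaction always promotes the 3D model):

        def pick_representation(distance, depth_perception_dist, has_3d_model, interacting):
            """Choose how to draw an object, mirroring the selection rules above."""
            if interacting and has_3d_model:
                return "3d_model"                    # interaction always uses geometry
            if has_3d_model and distance < depth_perception_dist:
                return "3d_model"                    # internal depth becomes perceptible
            if distance < 10 * depth_perception_dist:
                return "billboard"                   # intermediate range (assumed factor)
            return "environment_map"                 # far range

        for d in (5, 50, 500):
            print(d, "->", pick_representation(d, depth_perception_dist=20,
                                               has_3d_model=True, interacting=False))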

  9. Visualizing dynamic geosciences phenomena using an octree-based view-dependent LOD strategy within virtual globes

    NASA Astrophysics Data System (ADS)

    Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo

    2011-09-01

    Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional (4D) data, 3D in space plus time, is too large to render. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also, the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphics cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework in which an octree-based multiresolution data structure is implemented to organize time series 3D geospatial data for use in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data from a simulated dust storm event are rendered in World Wind, an open source virtual globe. The rendering performance with and without the octree-based LOD strategy is compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances visualization performance when rendering dynamic geospatial phenomena in virtual globes.
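
    View-dependent LOD over an octree typically refines a node while its projected screen-space error exceeds a tolerance; a compact sketch (the extent/distance error metric is the common heuristic, not necessarily this paper's exact formula):

        import numpy as np

        def select_nodes(center, extent, depth, cam, tau=0.1, max_depth=4):
            """Return the octree cells to render for camera position `cam`."""
            dist = max(np.linalg.norm(center - cam), 1e-6)
            if extent / dist < tau or depth == max_depth:
                return [(tuple(center), extent, depth)]       # coarse enough for this view
            cells = []
            for dx in (-1, 1):
                for dy in (-1, 1):
                    for dz in (-1, 1):
                        child = center + 0.25 * extent * np.array([dx, dy, dz])
                        cells += select_nodes(child, extent / 2, depth + 1, cam, tau, max_depth)
            return cells

        nodes = select_nodes(np.zeros(3), 100.0, 0, cam=np.array([120.0, 0.0, 0.0]))
        print(len(nodes), "cells selected; finer cells cluster near the camera")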

  10. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  11. Real-time reconstruction of three-dimensional brain surface MR image using new volume-surface rendering technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Momose, T.; Oku, S.

    It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to make even one 3D image, so it has not been practical to make brain surface images in arbitrary directions on a real-time basis using ordinary workstations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of the resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of the resulting images is comparable to that of VRT and the computation time to that of SRT. In VSRT, the process of volume rendering is done only once, along the direction of the normal vector of each surface point, rather than each time a new view point is determined as in VRT. Subsequent reconstruction of the 3D image uses an algorithm similar to that of SRT. Thus we can obtain brain surface MR images of sufficient quality viewed from any direction on a real-time basis using an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 s in VSRT, whereas it is more than 15 s in the conventional VRT. The difference in resulting image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR images is very useful and practical for functional-anatomical correlation studies.
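
    The trick can be paraphrased as: ray-cast once per surface point along its normal and cache the result as a vertex attribute, then reuse it for every viewpoint; a toy version (the sample opacity is assumed, and trilinear sampling is skipped for brevity):

        import numpy as np

        def bake_surface_intensity(vol, verts, normals, n_samples=8, step=1.0):
            """One-time volume integration along each vertex normal (the VRT-like pass)."""
            baked = np.zeros(len(verts))
            for i, (v, n) in enumerate(zip(verts, normals)):
                acc, alpha = 0.0, 0.0
                for k in range(n_samples):
                    p = np.clip(v + k * step * n, 0, np.array(vol.shape) - 1).astype(int)
                    s = vol[p[0], p[1], p[2]]
                    acc += (1 - alpha) * 0.3 * s       # fixed per-sample opacity (assumed)
                    alpha += (1 - alpha) * 0.3
                baked[i] = acc
            return baked   # reused unchanged for every new viewpoint (the SRT-like pass)

        vol = np.random.rand(32, 32, 32)
        verts = np.random.rand(100, 3) * 31
        normals = np.tile([0.0, 0.0, 1.0], (100, 1))
        print(bake_surface_intensity(vol, verts, normals)[:3])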

  12. Feasibility study: real-time 3-D ultrasound imaging of the brain.

    PubMed

    Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D

    2004-10-01

    We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.

  13. A pitfall of the volume rendering method with 3D time-of-flight MRA: a case of a branching vessel at the aneurysm neck.

    PubMed

    Goto, Masami; Kunimatsu, Akira; Shojima, Masaaki; Abe, Osamu; Aoki, Shigeki; Hayashi, Naoto; Mori, Harushi; Ino, Kenji; Yano, Keiichi; Saito, Nobuhito; Ohtomo, Kuni

    2013-03-25

    We present a case in which the origin of a branching vessel at the aneurysm neck was observed in the wrong place with the volume rendering method (VR) applied to 3D time-of-flight MRA (3D-TOF-MRA) on a 3-Tesla MR system. In 3D-TOF-MRA, it is often difficult to observe the origin of a branching vessel, but it is unusual for it to be observed in the wrong place. In the planning of interventional treatment and surgical procedures, false recognition, as in the unique case in the present report, is a serious problem. Decisions based only on VR with 3D-TOF-MRA can be a cause of suboptimal selection in clinical treatment.

  14. Visualizing Astronomical Data with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2014-01-01

    We present methods for using the 3D graphics program Blender in the visualization of astronomical data. The software's forte for animating 3D data lends itself well to use in astronomy. The Blender graphical user interface and Python scripting capabilities can be utilized in the generation of models for data cubes, catalogs, simulations, and surface maps. We review methods for data import, 2D and 3D voxel texture applications, animations, camera movement, and composite renders. Rendering times can be improved by using graphic processing units (GPUs). A number of examples are shown using the software features most applicable to various kinds of data paradigms in astronomy.
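
    The abstract above describes a scripted workflow rather than giving code; as a minimal illustrative sketch (not the author's implementation), the following Blender Python snippet keyframes a fly-around camera move of the kind mentioned. The object name "Camera" and the orbit radius, height, and frame count are assumptions.

    ```python
    # Minimal sketch: keyframe a circular fly-around with Blender's bpy API.
    # Assumes the scene already contains an object named "Camera"; radius,
    # height, and frame count are illustrative values.
    import math
    import bpy

    cam = bpy.data.objects["Camera"]
    n_frames, radius, height = 120, 10.0, 3.0

    # Aim the camera at an empty placed at the scene origin.
    target = bpy.data.objects.new("Target", None)
    bpy.context.collection.objects.link(target)
    track = cam.constraints.new(type='TRACK_TO')
    track.target = target
    track.track_axis = 'TRACK_NEGATIVE_Z'
    track.up_axis = 'UP_Y'

    for f in range(1, n_frames + 1):
        theta = 2.0 * math.pi * (f - 1) / n_frames
        cam.location = (radius * math.cos(theta), radius * math.sin(theta), height)
        cam.keyframe_insert(data_path="location", frame=f)
    ```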

  15. Gallium(III) complexes of DOTA and DOTA-monoamide: kinetic and thermodynamic studies.

    PubMed

    Kubícek, Vojtech; Havlícková, Jana; Kotek, Jan; Tircsó, Gyula; Hermann, Petr; Tóth, Eva; Lukes, Ivan

    2010-12-06

    Given the practical advantages of the (68)Ga isotope in positron emission tomography applications, gallium complexes are gaining increasing importance in biomedical imaging. However, the strong tendency of Ga(3+) to hydrolyze and the slow formation and very high stability of macrocyclic complexes altogether render Ga(3+) coordination chemistry difficult and explain why stability and kinetic data on Ga(3+) complexes are rather scarce. Here we report solution and solid-state studies of Ga(3+) complexes formed with the macrocyclic ligand 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid, (DOTA)(4-), and its mono(n-butylamide) derivative, (DO3AM(Bu))(3-). Thermodynamic stability constants, log K(GaDOTA) = 26.05 and log K(GaDO3AM(Bu)) = 24.64, were determined by out-of-cell pH-potentiometric titrations. Due to the very slow formation and dissociation of the complexes, equilibration times of up to ∼4 weeks were necessary. The kinetics of complex dissociation were followed by (71)Ga NMR under both acidic and alkaline conditions. The GaDOTA complex is significantly more inert (τ(1/2) ∼12.2 d at pH = 0 and τ(1/2) ∼6.2 h at pH = 10) than the GaDO3AM(Bu) analogue (τ(1/2) ∼2.7 d at pH = 0 and τ(1/2) ∼0.7 h at pH = 10). Nevertheless, the kinetic inertness of both chelates is extremely high and supports the application of Ga(3+) complexes of such DOTA-like ligands in molecular imaging. The solid-state structure of the GaDOTA complex, crystallized from a strongly acidic solution (pH < 1), evidenced a diprotonated form with protons localized on the free carboxylate pendants.

  16. Synthesized view comparison method for no-reference 3D image quality assessment

    NASA Astrophysics Data System (ADS)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes virtual middle views warped from the left and right views by the depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from the different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
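
    The comparison step lends itself to a compact sketch. Below is a hedged Python illustration of the scoring idea only: the DIBR warping is assumed to happen elsewhere, and the function name and inputs are illustrative, not the authors' implementation.

    ```python
    # Score agreement between two independently synthesized middle views with
    # SSIM; low agreement suggests rendering or depth-map artifacts.
    import numpy as np
    from skimage.metrics import structural_similarity

    def svc_score(view_from_left: np.ndarray, view_from_right: np.ndarray) -> float:
        """SSIM in [-1, 1] between two virtual views of the same viewpoint,
        one warped from the left camera and one from the right (8-bit RGB)."""
        return structural_similarity(view_from_left, view_from_right,
                                     data_range=255, channel_axis=-1)
    ```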

  17. Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data

    PubMed Central

    Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.

    2005-01-01

    The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787

  18. Cost and time-effective method for multi-scale measures of rugosity, fractal dimension, and vector dispersion from coral reef 3D models

    PubMed Central

    Dey, S.

    2017-01-01

    We present a method to construct and analyse 3D models of underwater scenes using a single cost-effective camera on a standard laptop, requiring (a) free or low-cost software, (b) no computer programming ability, and (c) minimal man-hours for both filming and analysis. This study focuses on four key structural complexity metrics: point-to-point distances, linear rugosity (R), fractal dimension (D), and vector dispersion (1/k). We present the first assessment of accuracy and precision of structure-from-motion (SfM) 3D models from an uncalibrated GoPro™ camera at a small scale (4 m2) and show that they can provide meaningful, ecologically relevant results. Models had root mean square errors of 1.48 cm in X-Y and 1.35 cm in Z, and accuracies of 86.8% (R), 99.6% (D at scales 30–60 cm), 93.6% (D at scales 1–5 cm), and 86.9% (1/k). Values of R were compared to in-situ chain-and-tape measurements, while values of D and 1/k were compared with ground truths from 3D printed objects modelled underwater. All metrics varied less than 3% between independently rendered models. We thereby improve and rigorously validate a tool for ecologists to non-invasively quantify coral reef structural complexity with a variety of multi-scale metrics. PMID:28406937
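
    Of the four metrics, linear rugosity is simple enough to state in a few lines. The sketch below is a hedged digital analogue of the chain-and-tape measurement (chain length over straight-line length along a transect sampled from the 3D model); the function name and input format are assumptions.

    ```python
    # Linear rugosity R of a transect: contour (chain) length divided by the
    # straight-line distance between its endpoints. Input: (N, 3) XYZ points.
    import numpy as np

    def linear_rugosity(transect_xyz: np.ndarray) -> float:
        segment_lengths = np.linalg.norm(np.diff(transect_xyz, axis=0), axis=1)
        chain_length = segment_lengths.sum()
        straight_length = np.linalg.norm(transect_xyz[-1] - transect_xyz[0])
        return chain_length / straight_length  # R >= 1; ~1 for a flat transect
    ```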

  19. [3-D echocardiography: new developments and future prospects].

    PubMed

    Müller, Silvana; Bartel, Thomas; Pachinger, Otmar; Erbel, Raimund

    2002-05-01

    Due to limitations in transthoracic and occasionally transesophageal 2-D echocardiography with respect to volumetric analysis and morphologic and functional assessment in patients with congenital malformations and valvular heart disease, additional diagnostic tools have been established. In parallel with the rapid evolution of computer technology, 3-D echocardiography has grown into a well-developed technique, such as volume-rendered 3-D reconstruction, capable of displaying dynamic morphology that depicts the depth of structures, their attachment, and their spatial relation to the surrounding tissue. Nevertheless, the complexity of data acquisition and data processing required for adequate dynamic 3-D echocardiographic imaging and volumetric analysis does not allow this approach to be used routinely. Commonly used dynamic 3-D echocardiography involves off-line computer-assisted image reconstruction from a series of cross-sectional echocardiographic images using currently available transesophageal and transthoracic transducers. Alternatively, real-time 3-D echocardiography based on novel matrix phased-array transducer technology has been introduced. Although this technique can easily be combined with any routine examination, its clinical use is limited because of a lower image quality in comparison with dynamic 3-D echocardiography. Up to now, there is no transesophageal approach available using real-time 3-D echocardiography. Recently, the dynamic 3-D echocardiographic technique has matured noticeably. Besides the well-known sequential scanning, which is characterized by a probe and patient fixed in space and a predetermined motion of the transducer, freehand scanning using an electromagnetic location system has found its way into the clinical environment. The main advantage of this technique is that the transducer can be moved freely by the examiner and the data set can thus be acquired within a routine examination. 3-D rendering and display have also been developed further. In this respect, especially the "real-time rendering mode", which allows the reconstructed 3-D image to be animated, moved in space, and viewed from different perspectives, has gained increasing acceptance. In valvular heart disease, reconstructive surgical treatment is the goal. 3-D echocardiographic imaging is the only technique providing "surgical views" prior to opening the heart. It is capable of distinguishing particular destructive substructures of the valves and the valvular apparatus, which is of clinical importance for achieving optimal surgical results, especially in mitral valve reconstruction. With respect to volumetric and mass analysis, 3-D echocardiography is more accurate and reproducible than conventional 2-D analysis. It provides data independent of geometric assumptions, which may considerably influence the results in the presence of wall motion abnormalities, especially in aneurysmal ventricles. Volumetric analysis of the aneurysmal portion may also be helpful prior to surgical resection. 3-D echocardiography can also be recommended as a valuable additional approach to atrial septal defect (ASD), corrected transposition of the great arteries, cor triatriatum, and, within limits, to ventricular septal defect (VSD) as well. Especially with respect to ASD and VSD, the potential significance of 3-D echocardiography prior to device closure is emphasized. At present, the additional information it provides in decision-making and the increasing number of clinical cases that can be addressed already justify the clinical use of this technique.

  20. Voxel Datacubes for 3D Visualization in Blender

    NASA Astrophysics Data System (ADS)

    Gárate, Matías

    2017-05-01

    The growth of computational astrophysics and the complexity of multi-dimensional data sets demonstrate the need for new versatile visualization tools for both the analysis and presentation of the data. In this work, we show how to use the open-source software Blender as a three-dimensional (3D) visualization tool to study and visualize numerical simulation results, focusing on astrophysical hydrodynamic experiments. With a datacube as input, the software can generate a volume rendering of the 3D data, show the evolution of a simulation in time, and perform a fly-around camera animation to highlight the points of interest. We explain the process of importing simulation outputs into Blender using the voxel data format, and how to set up a visualization scene in the software interface. This method allows scientists to perform a complementary visual analysis of their data and display their results in an appealing way, both for outreach and science presentations.
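
    As a hedged sketch of the import path the abstract refers to, the snippet below writes a NumPy datacube in the layout commonly described for Blender's legacy voxel-data (.bvox) files: a header of four little-endian int32 values (nx, ny, nz, number of frames) followed by float32 densities normalized to [0, 1]. The exact layout should be verified against the Blender version in use.

    ```python
    # Export a datacube to the assumed .bvox layout for Blender voxel textures.
    import numpy as np

    def write_bvox(cube: np.ndarray, path: str) -> None:
        nz, ny, nx = cube.shape                   # assumes cube[z, y, x] indexing
        header = np.array([nx, ny, nz, 1], dtype='<i4')
        span = np.ptp(cube) or 1.0                # guard against constant cubes
        dens = (cube - cube.min()) / span         # normalize to [0, 1]
        with open(path, 'wb') as f:
            header.tofile(f)
            dens.astype('<f4').tofile(f)

    write_bvox(np.random.rand(64, 64, 64), 'cube.bvox')
    ```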

  1. Tangible display systems: direct interfaces for computer-based studies of surface appearance

    NASA Astrophysics Data System (ADS)

    Darling, Benjamin A.; Ferwerda, James A.

    2010-02-01

    When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections. We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation system will be better able to support a range of appearance perception applications.

  2. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to a lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  3. 3D SPECT/CT fusion using image data projection of bone SPECT onto 3D volume-rendered CT images: feasibility and clinical impact in the diagnosis of bone metastasis.

    PubMed

    Ogata, Yuji; Nakahara, Tadaki; Ode, Kenichi; Matsusaka, Yohji; Katagiri, Mari; Iwabuchi, Yu; Itoh, Kazunari; Ichimura, Akira; Jinzaki, Masahiro

    2017-05-01

    We developed a method of image data projection of bone SPECT onto 3D volume-rendered CT images for 3D SPECT/CT fusion. The aims of our study were to evaluate its feasibility and clinical usefulness. Whole-body bone scintigraphy (WB) and SPECT/CT scans were performed in 318 cancer patients using a dedicated SPECT/CT system. Volume data of bone SPECT and CT were fused to obtain 2D SPECT/CT images. To generate our 3D SPECT/CT images, colored voxel data of bone SPECT were projected onto the corresponding location of the volume-rendered CT data after a semi-automatic bone extraction. The resultant 3D images were then blended with conventional volume-rendered CT images, making it possible to grasp the three-dimensional relationship between bone metabolism and anatomy. WB and SPECT (WB + SPECT), 2D SPECT/CT fusion, and 3D SPECT/CT fusion were evaluated by two independent reviewers in the diagnosis of bone metastasis. The inter-observer variability and diagnostic accuracy of these three image sets were investigated using a four-point diagnostic scale. Increased bone metabolism was found in 744 metastatic sites and 1002 benign changes. On a per-lesion basis, inter-observer agreement in the diagnosis of bone metastasis was 0.72 for WB + SPECT, 0.90 for 2D SPECT/CT, and 0.89 for 3D SPECT/CT. Receiver operating characteristic analyses for the diagnostic accuracy of bone metastasis showed that WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT had areas under the curve of 0.800, 0.983, and 0.983 for reader 1, and 0.865, 0.992, and 0.993 for reader 2, respectively (WB + SPECT vs. 2D or 3D SPECT/CT, p < 0.001; 2D vs. 3D SPECT/CT, n.s.). The durations of interpretation of WB + SPECT, 2D SPECT/CT, and 3D SPECT/CT images were 241 ± 75, 225 ± 73, and 182 ± 71 s for reader 1 and 207 ± 72, 190 ± 73, and 179 ± 73 s for reader 2, respectively. Thus, 3D SPECT/CT images took less time to read than 2D SPECT/CT (p < 0.0001) or WB + SPECT images (p < 0.0001). 3D SPECT/CT fusion offers diagnostic accuracy comparable to 2D SPECT/CT fusion, and its visual effect reduces reading time compared to 2D SPECT/CT fusion.

  4. RenderMan design principles

    NASA Technical Reports Server (NTRS)

    Apodaca, Tony; Porter, Tom

    1989-01-01

    The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple looking images. Photorealistic image synthesis software runs slowly on large expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems for computing an accurate rendition of that scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, specifically the extensibility it offers in shading calculations, is discussed.

  5. NOTE: Wobbled splatting—a fast perspective volume rendering method for simulation of x-ray images from CT

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar

    2005-05-01

    3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs—which are perspective summed voxel renderings—is desired. In this note, we present a simple and rapid method for generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by the simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at frame rates of approximately 10 Hz when rendering volume images with a size of 30 MB.
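
    The anti-aliasing idea is easy to illustrate. The following hedged sketch shows only the core "wobble": each occupied voxel is splatted at a stochastically jittered detector position instead of being spread by a footprint. For brevity it uses an orthographic projection along z rather than the paper's perspective geometry, and all names are illustrative.

    ```python
    # Stochastic-jitter splatting of a volume onto a 2D detector (orthographic).
    import numpy as np

    def wobbled_splat(volume: np.ndarray, sigma: float = 0.5) -> np.ndarray:
        nz, ny, nx = volume.shape
        image = np.zeros((ny, nx))
        zs, ys, xs = np.nonzero(volume > 0)           # splat occupied voxels only
        jitter = np.random.normal(0.0, sigma, size=(2, xs.size))
        yj = np.clip(np.round(ys + jitter[0]).astype(int), 0, ny - 1)
        xj = np.clip(np.round(xs + jitter[1]).astype(int), 0, nx - 1)
        np.add.at(image, (yj, xj), volume[zs, ys, xs])  # accumulate splats
        return image
    ```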

  6. 3D Printout Models vs. 3D-Rendered Images: Which Is Better for Preoperative Planning?

    PubMed

    Zheng, Yi-xiong; Yu, Di-fei; Zhao, Jian-gang; Wu, Yu-lian; Zheng, Bin

    2016-01-01

    Correct interpretation of a patient's anatomy and of the changes that occur secondary to a disease process is crucial in the preoperative process to ensure optimal surgical treatment. In this study, we presented 3 different pancreatic cancer cases to surgical residents in the form of 3D-rendered images and 3D-printed models to investigate which modality resulted in the most appropriate preoperative plan. We selected 3 cases that would require significantly different preoperative plans based on key features identifiable in the preoperative computed tomography imaging. 3D volume rendering and 3D printing were performed, respectively, to create 2 different training modalities. A total of 30 first-year surgical residents were randomly divided into 2 groups. Besides traditional 2D computed tomography images, residents in group A (n = 15) reviewed 3D computer models, whereas residents in group B (n = 15) reviewed 3D-printed models. Both groups subsequently completed an examination, designed in-house, to assess the appropriateness of their preoperative plan and provide a numerical score of the quality of the surgical plan. Residents in group B showed significantly higher surgical-plan quality scores than residents in group A (76.4 ± 10.5 vs. 66.5 ± 11.2, p = 0.018). This difference was due in large part to a significant difference in knowledge of key surgical steps (22.1 ± 2.9 vs. 17.4 ± 4.2, p = 0.004) between the groups. All participants reported a high level of satisfaction with the exercise. Results from this study support our hypothesis that 3D-printed models improve the quality of surgical trainees' preoperative plans. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  7. Hyoid bone development: An assessment of optimal CT scanner parameters and 3D volume rendering techniques

    PubMed Central

    Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.

    2015-01-01

    The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349

  8. Data Cube Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Gárate, Matías

    2017-06-01

    With the increasing data acquisition rates from observational and computational astrophysics, new tools are needed to study and visualize data. We present a methodology for rendering 3D data cubes using the open-source 3D software Blender. By importing processed observations and numerical simulations through the Voxel Data format, we are able use the Blender interface and Python API to create high-resolution animated visualizations. We review the methods for data import, animation, and camera movement, and present examples of this methodology. The 3D rendering of data cubes gives scientists the ability to create appealing displays that can be used for both scientific presentations as well as public outreach.

  9. Recent advances in head-mounted light field displays for virtual and augmented reality (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hua, Hong

    2017-02-01

    Head-mounted light field displays render a true 3D scene by sampling either the projections of the 3D scene at different depths or the directions of the light rays apparently emitted by the 3D scene and viewed from different eye positions. They are capable of rendering correct or nearly correct focus cues and addressing the very well-known vergence-accommodation mismatch problem in conventional virtual and augmented reality displays. In this talk, I will focus on reviewing recent advancements of head-mounted light field displays for VR and AR applications. I will demonstrate examples of HMD systems developed in my group.

  10. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy.

    PubMed

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-11-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data.

  11. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy

    PubMed Central

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-01-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon–bone–muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18–30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data. PMID:19678857

  12. Real-time 3D image reconstruction guidance in liver resection surgery

    PubMed Central

    Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-01-01

    Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, it adds difficulties that can be reduced through computer technology. Methods From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that augments the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools that combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were carried out, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in terms of safety, but also the current limits that automatic augmented reality will have to overcome. Conclusions Virtual patient modeling should become mandatory for certain interventions that remain to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but currently remains limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the hybrid OR. PMID:24812598

  13. Covalently-bonded grafting of [Ln3(Benzimidazole)4]-arrayed (Ln = Tb, Nd, Yb or Er) complex monomers into PNBE (poly(norbornene)) with highly luminous color-purity green-light or efficient NIR luminescence

    NASA Astrophysics Data System (ADS)

    Liu, Lin; Fu, Guorui; Feng, Heini; Guan, Jiaqing; Li, Fengping; Lü, Xingqiang; Wong, Wai-Kwok; Jones, Richard A.

    2017-07-01

    Within a series of Ln3-grafted polymers Poly({[Ln3(L)4(NO3)6]·(NO3)·(H3O)2}-co-NBE) (Ln = La, 1; Ln = Eu, 2; Ln = Tb, 3; Ln = Nd, 4; Ln = Yb, 5; Ln = Er, 6 or Ln = Gd, 7) obtained from ring-opening metathesis polymerization (ROMP) of norbornene (NBE) with each of the allyl-functionalized complex monomers {[Ln3(L)4(NO3)6]·(NO3)·(H3O)2} (HL = 4-allyl-2-(1H-benzo[d]imidazol-2-yl)-6-methoxyphenol), PNBE-assisted effective energy transfer renders Poly(3-co-NBE) Tb3+-centered, highly luminous, color-pure green-light emission with an attractive quantum yield of 87%, and efficient near-infrared (NIR) luminescence (ΦNdL = 0.61%; ΦYbL = 1.47% and ΦErL = 0.03%) for the Nd3+-, Yb3+- or Er3+-grafted polymers.

  14. Astronomy Data Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-08-01

    We present innovative methods and techniques for using Blender, a 3D software package, in the visualization of astronomical data. N-body simulations, data cubes, galaxy and stellar catalogs, and planetary surface maps can be rendered in high-quality videos for exploratory data analysis. Blender's API is Python based, making it advantageous for use in astronomy with flexible libraries like astroPy. Examples are exhibited that showcase the features of the software in astronomical visualization paradigms. 2D and 3D voxel texture applications, animations, camera movement, and composite renders are introduced to the astronomer's toolkit, along with how they mesh with different forms of data.

  15. Image-Based Virtual Tours and 3d Modeling of Past and Current Ages for the Enhancement of Archaeological Parks: the Visualversilia 3d Project

    NASA Astrophysics Data System (ADS)

    Castagnetti, C.; Giannini, M.; Rivola, R.

    2017-05-01

    The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on new technologies applied to cultural heritage has turned its attention mainly to technologies that reconstruct and narrate the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. a virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides in order to enable both an immersive and a remote tourist experience; 2. a 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions of the main historical periods on the basis of historical investigation and the analysis of the acquired data.

  16. Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.

    PubMed

    Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong

    2006-04-01

    This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions is restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration, and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis was used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring involving both real and rendered bronchoscope images was conducted.

  17. Advanced 3-dimensional planning in neurosurgery.

    PubMed

    Ferroli, Paolo; Tringali, Giovanni; Acerbi, Francesco; Schiariti, Marco; Broggi, Morgan; Aquino, Domenico; Broggi, Giovanni

    2013-01-01

    During the past decades, medical applications of virtual reality technology have been developing rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements such as a feedback system increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described here. Different medical image volume rendering systems have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source software (3D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution experienced how advanced 3-D planning before surgery facilitated and increased their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the processing power of modern computers. Although it has been found useful for understanding complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.

  18. Target surface finding using 3D SAR data

    NASA Astrophysics Data System (ADS)

    Ruiter, Jason R.; Burns, Joseph W.; Subotic, Nikola S.

    2005-05-01

    Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all-weather, near-visual target identification and/or scene interpretation. One way of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface-finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube that provides a recognizable representation of the target surface. This algorithm was applied to the public-release X-patch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The robustness of the algorithm to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low signal-to-noise ratio (SNR) conditions.
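
    For readers who want to experiment with the surface-finding step, scikit-image ships a marching cubes implementation, a close relative of the Marching Tetrahedrons variant used in the paper (not the same algorithm). The sketch below extracts an iso-surface mesh from a 3-D amplitude cube; the threshold level is an assumed parameter.

    ```python
    # Extract a triangle mesh approximating an iso-surface of a 3-D data cube.
    import numpy as np
    from skimage.measure import marching_cubes

    def target_surface(cube: np.ndarray, level: float):
        verts, faces, normals, values = marching_cubes(cube, level=level)
        return verts, faces

    # Example: mesh a synthetic cube at half of its peak amplitude.
    cube = np.random.rand(32, 32, 32)
    verts, faces = target_surface(cube, level=0.5 * cube.max())
    ```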

  19. Optimization of spine surgery planning with 3D image templating tools

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Huddleston, Paul M.; Holmes, David R., III; Shridharani, Shyam M.; Robb, Richard A.

    2008-03-01

    The current standard of care for patients with spinal disorders involves a thorough clinical history, physical exam, and imaging studies. Simple radiographs provide a valuable assessment but prove inadequate for surgery planning because of the complex 3-dimensional anatomy of the spinal column and the close proximity of the neural elements, large blood vessels, and viscera. Currently, clinicians still use primitive techniques such as paper cutouts, pencils, and markers in an attempt to analyze and plan surgical procedures. 3D imaging studies are routinely ordered prior to spine surgeries but are currently limited to generating simple, linear and angular measurements from 2D views orthogonal to the central axis of the patient. Complex spinal corrections require more accurate and precise calculation of 3D parameters such as oblique lengths, angles, levers, and pivot points within individual vertebra. We have developed a clinician friendly spine surgery planning tool which incorporates rapid oblique reformatting of each individual vertebra, followed by interactive templating for 3D placement of implants. The template placement is guided by the simultaneous representation of multiple 2D section views from reformatted orthogonal views and a 3D rendering of individual or multiple vertebrae enabling superimposition of virtual implants. These tools run efficiently on desktop PCs typically found in clinician offices or workrooms. A preliminary study conducted with Mayo Clinic spine surgeons using several actual cases suggests significantly improved accuracy of pre-operative measurements and implant localization, which is expected to increase spinal procedure efficiency and safety, and reduce time and cost of the operation.

  20. 3D Model Generation from UAV: Historical Mosque (Masjid Lama Nilai)

    NASA Astrophysics Data System (ADS)

    Nasir, N. H. Mohd; Tahar, K. N.

    2017-08-01

    Preserving cultural heritage and historic sites is an important issue. These sites are subjected to erosion and vandalism, and, as long-lived artifacts, they have gone through many phases of construction, damage and repair. It is important to keep an accurate record of these sites as they currently are, using 3-D model-building technology, so that preservationists can track changes, foresee structural problems, and allow a wider audience to "virtually" see and tour these sites. Due to the complexity of these sites, building 3-D models is time consuming and difficult, usually involving much manual effort. This study discusses new methods that can reduce the time needed to build a model by using an Unmanned Aerial Vehicle (UAV), and aims to develop a 3D model of a historical mosque using UAV photogrammetry. To achieve this, a data set of Masjid Lama Nilai, Negeri Sembilan was captured using an Unmanned Aerial Vehicle. In addition, an accuracy assessment between actual and measured values is made. Besides that, a comparison between the rendered 3D model and the textured 3D model is also carried out in this study.

  1. Is 3D true non-linear traveltime tomography reasonable?

    NASA Astrophysics Data System (ADS)

    Herrero, A.; Virieux, J.

    2003-04-01

    The data sets requiring 3D analysis tools in the context of seismic exploration (both onshore and offshore experiments) or natural seismicity (micro-seismicity surveys or post-event measurements) are more and more numerous. Classical linearized tomographies and also earthquake localisation codes need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis is not able to provide an adequate background velocity image. Moreover, the design of the acquisition layouts is often intrinsically 3D and renders even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on the use of a true non-linear 3D approach, which allows the model space to be explored and an optimal velocity image to be identified. The problem then becomes practical, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that facing a 3D traveltime tomography problem with an extensive non-linear approach combining fast traveltime estimators based on level-set methods with optimisation techniques such as a multiscale strategy is feasible. Moreover, because the management of inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.

  2. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
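
    The consistency argument can be made concrete with a toy version of the representation (without the paper's sparse 4D-Gaussian machinery, which is the actual contribution). In the hedged sketch below, each low-resolution voxel stores a dense histogram over its 2x2x2 neighborhood, and a transfer function is applied as an expectation over that pdf rather than to a single down-sampled value; all names are illustrative.

    ```python
    # Toy pdf-based downsampling: per-voxel histograms instead of averaged values.
    import numpy as np

    def downsample_pdfs(vol: np.ndarray, n_bins: int = 16):
        nz, ny, nx = (s // 2 for s in vol.shape)
        blocks = vol[:nz * 2, :ny * 2, :nx * 2].reshape(nz, 2, ny, 2, nx, 2)
        blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(nz, ny, nx, 8)
        edges = np.linspace(vol.min(), vol.max(), n_bins + 1)
        idx = np.clip(np.digitize(blocks, edges) - 1, 0, n_bins - 1)
        pdfs = np.zeros((nz, ny, nx, n_bins))
        for b in range(n_bins):                   # normalized per-voxel histograms
            pdfs[..., b] = (idx == b).sum(axis=-1) / 8.0
        centers = 0.5 * (edges[:-1] + edges[1:])
        return pdfs, centers

    def apply_transfer_function(pdfs, centers, tf):
        # E[tf(intensity)] per low-res voxel: consistent across resolution levels.
        return pdfs @ tf(centers)
    ```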

  3. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data mentioned above. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities into the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.

  4. Fast DRR generation for 2D to 3D registration on GPUs.

    PubMed

    Tornai, Gábor János; Cserey, György; Pappas, Ion

    2012-08-01

    The generation of digitally reconstructed radiographs (DRRs) is the most time consuming step on the CPU in intensity based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares performance achievable on four commercially available devices. A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean of the execution time increased linearly with respect to the number of pixels if sampling was used. The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or depending on the application and hardware in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.

  5. Low-Cost 3D Printing Orbital Implant Templates in Secondary Orbital Reconstructions.

    PubMed

    Callahan, Alison B; Campbell, Ashley A; Petris, Carisa; Kazim, Michael

    Despite its increasing use in craniofacial reconstructions, three-dimensional (3D) printing of customized orbital implants has not been widely adopted. Limitations include the cost of 3D printers able to print in a biocompatible material suitable for implantation in the orbit and the breadth of available implant materials. The authors report the technique of low-cost 3D printing of orbital implant templates used in complex, often secondary, orbital reconstructions. A retrospective case series of 5 orbital reconstructions utilizing a technique of 3D printed orbital implant templates is presented. Each patient's Digital Imaging and Communications in Medicine data were uploaded and processed to create 3D renderings upon which a customized implant was designed and sent electronically to printers open for student use at our affiliated institutions. The mock implants were sterilized and used intraoperatively as a stencil and mold. The final implant material was chosen by the surgeons based on the requirements of the case. Five orbital reconstructions were performed with this technique: 3 tumor reconstructions and 2 orbital fractures. Four of the 5 cases were secondary reconstructions. Molded Medpor Titan (Stryker, Kalamazoo, MI) implants were used in 4 cases and titanium mesh in 1 case. The stenciled and molded implants were adjusted no more than 2 times before anchored in place (mean 1). No case underwent further revision. The technique and cases presented demonstrate 1) the feasibility and accessibility of low-cost, independent use of 3D printing technology to fashion patient-specific implants in orbital reconstructions, 2) the ability to apply this technology to the surgeon's preference of any routinely implantable material, and 3) the utility of this technique in complex, secondary reconstructions.

  6. Automatic transfer function generation for volume rendering of high-resolution x-ray 3D digital mammography images

    NASA Astrophysics Data System (ADS)

    Alyassin, Abdal M.

    2002-05-01

    3D digital mammography (3DDM) is a new technology that provides high-resolution X-ray breast tomographic data. As with other tomographic medical imaging modalities, viewing a stack of tomographic images may require time, especially if the images have a large matrix size, and it can be difficult to mentally assemble 3D breast structures from the slices. Therefore, there is a need to readily visualize the data in 3D. However, one of the issues that hinder the use of volume rendering (VR) is finding an automatic way to generate transfer functions that efficiently map the important diagnostic information in the data. We have developed a method that randomly samples the volume. Based on the mean and the standard deviation of these samples, the technique determines the lower and upper limits of a piecewise-linear ramp transfer function. We have volume rendered several 3DDM data sets using this technique and visually compared the outcome with the result of a conventional automatic technique. The transfer function generated by the proposed technique provided VR images superior to those of the conventional technique. Furthermore, the reproducibility of the transfer function improved with the number of samples taken from the volume, at the expense of processing time.
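
    The described scheme reduces to a few lines; the sketch below is a hedged reading of it, with the sample count and the two-sigma window as assumed choices (the abstract does not fix either here).

    ```python
    # Estimate a piecewise-linear ramp transfer function from random samples.
    import numpy as np

    def auto_ramp(volume: np.ndarray, n_samples: int = 10_000, k: float = 2.0):
        rng = np.random.default_rng()
        samples = rng.choice(volume.ravel(), size=n_samples, replace=False)
        mu, sd = samples.mean(), samples.std()
        lo, hi = mu - k * sd, mu + k * sd        # limits of the linear ramp

        def opacity(intensity):
            # 0 below lo, linear ramp between lo and hi, 1 above hi
            return np.clip((intensity - lo) / (hi - lo), 0.0, 1.0)

        return opacity
    ```

    Consistent with the abstract's observation, the reproducibility of the estimated limits improves as n_samples grows, at the cost of extra processing time.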

  7. Hierarchical storage of large volumes of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners can generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images, such as 3D renderings and oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called "Bonjour". This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  8. Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre; Kuehnel, Frank; Stutz, John

    2004-01-01

    Generative models of natural images have long been used in computer vision. However, since they only describe the appearance of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering. We focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.

  9. Mesophilic and thermophilic anaerobic co-digestion of rendering plant and slaughterhouse wastes.

    PubMed

    Bayr, Suvi; Rantanen, Marianne; Kaparaju, Prasad; Rintala, Jukka

    2012-01-01

    Co-digestion of rendering and slaughterhouse wastes was studied in laboratory-scale, semi-continuously fed, continuously stirred tank reactors (CSTRs) at 35 and 55 °C. In all, 10 different rendering plant and slaughterhouse waste fractions were characterised, showing high contents of lipids and proteins and methane potentials of 262-572 dm(3) CH(4)/kg volatile solids (VS)(added). In the mesophilic CSTR, methane yields of ca. 720 dm(3) CH(4)/kg VS(fed) were obtained with organic loading rates (OLRs) of 1.0 and 1.5 kg VS/m(3) d and a hydraulic retention time (HRT) of 50 d. The thermophilic process, at the lowest studied OLR of 1.5 kg VS/m(3) d, turned out to be unstable after operation for 1.5 HRTs, due to accumulating ammonia, volatile fatty acids (VFAs) and probably also long-chain fatty acids (LCFAs). In conclusion, the mesophilic process was found to be more feasible for co-digestion than the thermophilic process, methane yields being higher and the process more stable under mesophilic conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Ultrasonic simulation—Imagine3D and SimScan: Tools to solve the inverse problem for complex turbine components

    NASA Astrophysics Data System (ADS)

    Mair, H. D.; Ciorau, P.; Owen, D.; Hazelton, T.; Dunning, G.

    2000-05-01

    Two ultrasonic simulation packages, Imagine 3D and SIMSCAN, have been specifically developed to solve the inverse problem for blade roots and rotor steeples of low-pressure turbines. The software was integrated with the 3D drawings of the inspected parts and with the dimensions of linear phased-array probes. SIMSCAN simulates the inspection scenario for both optional conditions: defect location and probe movement/refracted-angle range. The results are displayed in Imagine 3D with a variety of options: rendering, 1:1 display, grid, and generated UT beam. The results are very useful for procedure developers and training, and for optimizing the phased-array probe inspection sequence. A spreadsheet is generated to correlate the defect coordinates with UT data (probe position, skew and refracted angle, UT path, and probe movement). The simulation models were validated during experimental work with phased-array systems. The accuracy in probe position is ±1 mm, and the refracted/skew angle is within ±0.5°. Representative examples of phased-array focal laws/probe movement for a specific defect location are also included.

  11. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to a lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The objective of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real time throughout the cardiac cycle, and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user-friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.

  12. Pole Figure Explorer v. 1.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Benthem, Mark H.

    2016-05-04

    This software is employed for 3D visualization of X-ray diffraction (XRD) data, with functionality for slicing, reorienting, isolating, and plotting of 2D color contour maps and 3D renderings of large datasets. The program makes use of the multidimensionality of textured XRD data, where diffracted intensity is not constant over a given set of angular positions (as dictated by the three defined dimensional angles of phi, chi, and two-theta). Datasets are rendered in 3D with intensity as a scalar, represented on a rainbow color scale. A GUI with scrolling tools and interactive mouse control allows for fast manipulation of these large datasets, so as to perform detailed analysis of diffraction results with full dimensionality of the diffraction space.

  13. Three-Dimensional Reconstruction of Thoracic Structures: Based on Chinese Visible Human

    PubMed Central

    Luo, Na; Tan, Liwen; Fang, Binji; Li, Ying; Xie, Bing; Liu, Kaijun; Chu, Chun; Li, Min

    2013-01-01

    We managed to establish a three-dimensional digitized visible model of human thoracic structures and to provide morphological data for imaging diagnosis and thoracic and cardiovascular surgery. With Photoshop software, the contour lines of the lungs and mediastinal structures, including the heart, aorta and its branches, azygos vein, superior vena cava, inferior vena cava, thymus, esophagus, diaphragm, phrenic nerve, vagus nerve, sympathetic trunk, thoracic vertebrae, sternum, thoracic duct, and so forth, were segmented from the Chinese Visible Human (CVH)-1 data set. The contour data set of the segmented thoracic structures was imported into Amira software, and 3D thorax models were reconstructed via surface rendering and volume rendering. With Amira software, the surface-rendered model of the thoracic organs and its volume-rendered counterpart can be displayed together clearly and accurately. This provides a learning tool for interpreting human thoracic anatomy and for virtual thoracic and cardiovascular surgery for medical students and junior surgeons. PMID:24369489

  14. Fast algorithm for the rendering of three-dimensional surfaces

    NASA Astrophysics Data System (ADS)

    Pritt, Mark D.

    1994-02-01

    It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.
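
    For orientation, a shaded height-field surface of this general kind can be produced in a few vectorized array operations: estimate per-sample normals from image gradients and dot them with a light direction. The sketch below is a generic Lambertian pass in NumPy under that assumption, not a reconstruction of Pritt's anti-aliased algorithm.

```python
import numpy as np

def shade_heightfield(z, light=(0.5, 0.5, 0.707)):
    """Render a height field as a Lambertian-shaded image.

    Normals come from central-difference gradients, so the whole
    surface is shaded in a few vectorized passes rather than by
    per-pixel ray tracing.
    """
    gy, gx = np.gradient(z.astype(float))
    # Unnormalized normal of the surface z = f(x, y) is (-dz/dx, -dz/dy, 1).
    n = np.dstack((-gx, -gy, np.ones_like(z, dtype=float)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)   # one brightness value per sample

img = shade_heightfield(np.random.rand(1000, 1000))  # a 1000 x 1000 surface
```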

  15. AirShow 1.0 CFD Software Users' Guide

    NASA Technical Reports Server (NTRS)

    Mohler, Stanley R., Jr.

    2005-01-01

    AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.

  16. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores the graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g., captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g., Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities, and challenges for integrating visual information elements into 3D-TV content. This work should further help to improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information into 3D footage.

  17. "Just-In-Time" Simulation Training Using 3-D Printed Cardiac Models After Congenital Cardiac Surgery.

    PubMed

    Olivieri, Laura J; Su, Lillian; Hynes, Conor F; Krieger, Axel; Alfares, Fahad A; Ramakrishnan, Karthik; Zurakowski, David; Marshall, M Blair; Kim, Peter C W; Jonas, Richard A; Nath, Dilip S

    2016-03-01

    High-fidelity simulation using patient-specific three-dimensional (3D) models may be effective in facilitating pediatric cardiac intensive care unit (PCICU) provider training for clinical management of congenital cardiac surgery patients. The 3D-printed heart models were rendered from preoperative cross-sectional cardiac imaging for 10 patients undergoing congenital cardiac surgery. Immediately following surgical repair, a congenital cardiac surgeon and an intensive care physician conducted a simulation training session regarding postoperative care, utilizing the patient-specific 3D model, for the PCICU team. After the simulation, a Likert-type 0-to-10 scale questionnaire assessed participants' perception of the impact of the training session. Seventy clinicians participated in training sessions, including 22 physicians, 38 nurses, and 10 ancillary care providers. The average response to whether 3D models were more helpful than standard handoff was 8.4 of 10. Questions regarding enhancement of understanding and clinical ability received average responses of 9.0 or greater, and 90% of participants scored 8 of 10 or higher. Nurses scored significantly higher than other clinicians on self-reported familiarity with the surgery (7.1 vs. 5.8; P = .04), clinical management ability (8.6 vs. 7.7; P = .02), and ability enhancement (9.5 vs. 8.7; P = .02). Compared to physicians, nurses and ancillary providers were more likely to consider 3D models more helpful than standard handoff (8.7 vs. 7.7; P = .05). Higher case complexity predicted greater enhancement of understanding of the surgery (P = .04). The 3D heart models can be used to enhance congenital cardiac critical care via simulation training of multidisciplinary intensive care teams. The benefit may depend on provider type and case complexity. © The Author(s) 2016.

  18. Scalable algorithms for 3D extended MHD.

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2007-11-01

    In the modeling of plasmas with extended MHD (XMHD), the challenge is to resolve long time scales while rendering the whole simulation manageable. In XMHD, this is particularly difficult because fast (dispersive) waves are supported, resulting in a very stiff set of PDEs. In explicit schemes, such stiffness results in stringent numerical-stability time-step constraints, rendering them inefficient and algorithmically unscalable. In implicit schemes, it yields very ill-conditioned algebraic systems, which are difficult to invert. In this talk, we present recent theoretical and computational progress that demonstrates a scalable 3D XMHD solver (i.e., CPU time scaling as ~N, with N the number of degrees of freedom). The approach is based on Newton-Krylov methods, which are preconditioned for efficiency. The preconditioning stage admits suitable approximations without compromising the quality of the overall solution. In this work, we employ optimal (CPU ~N) multilevel methods on a parabolized XMHD formulation, which renders the whole algorithm scalable. The (crucial) parabolization step is required to render XMHD multilevel-friendly. Algebraically, the parabolization step can be interpreted as a Schur factorization of the Jacobian matrix, thereby providing a solid foundation for the current (and future extensions of the) approach. We will build towards 3D extended MHD [L. Chacón, Comput. Phys. Comm. 163 (3), 143-171 (2004); L. Chacón et al., 33rd EPS Conf. Plasma Physics, Rome, Italy, 2006] by discussing earlier algorithmic breakthroughs in 2D reduced MHD [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] and 2D Hall MHD [L. Chacón et al., J. Comput. Phys. 188 (2), 573-592 (2003)].
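
    The physics-based preconditioner is the heart of the talk and well beyond a snippet, but the outer Jacobian-free Newton-Krylov iteration itself is easy to demonstrate. Below is a minimal sketch, assuming SciPy's `newton_krylov`, that solves one backward-Euler step of a stiff 1D diffusion equation; the grid size and time step are arbitrary stand-ins for the stiffness that motivates implicit solvers.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 200
dt, dx = 1e-2, 1.0 / n          # stiff regime: dt >> dx**2
u0 = np.sin(np.pi * np.linspace(0.0, 1.0, n))

def laplacian(u):
    """Second-order finite-difference Laplacian with fixed endpoints."""
    d = np.zeros_like(u)
    d[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    return d

def residual(u):
    # Backward-Euler residual: u - dt * L(u) - u0 = 0.
    # newton_krylov never forms the Jacobian; it probes it with
    # finite-difference matrix-vector products inside a Krylov solver.
    return u - dt * laplacian(u) - u0

u1 = newton_krylov(residual, u0, method="lgmres")
```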

  19. 3D Geovisualization & Stylization to Manage Comprehensive and Participative Local Urban Plans

    NASA Astrophysics Data System (ADS)

    Brasebin, M.; Christophe, S.; Jacquinod, F.; Vinesse, A.; Mahon, H.

    2016-10-01

    3D geo-visualization is increasingly used and appreciated to support public participation, and is generally used to present predesigned planned projects. Nevertheless, other participatory processes may benefit from such technology, such as the elaboration of urban planning documents. In this article, we present one of the objectives of the PLU++ project: the design of a 3D geo-visualization system that eases participation concerning local urban plans. Through a pluridisciplinary approach, it aims at covering the different aspects of such a system: the simulation of built configurations to represent regulation information, the efficient stylization of these objects to make people understand their meanings, and the interaction between 3D simulation and stylization. The system aims at being adaptive to the participation context and to the dynamics of the participation. It will offer the possibility to modify simulation results and the rendering styles of the 3D representations to support participation. The proposed 3D rendering styles will be used in a set of practical experiments in order to test and validate hypotheses from past research by the project members about 3D simulation, 3D semiotics, and knowledge about uses.

  20. Real-time free-viewpoint DIBR for large-size 3D LED

    NASA Astrophysics Data System (ADS)

    Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru

    2017-10-01

    Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology, the most likely to be commercialized. In naked-eye 3D display, screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of LED screens, which degrades rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth-Image-Based Rendering (DIBR) method is proposed to synthesize new views. In order to achieve real-time performance, the whole algorithm is implemented on the GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video at 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
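
    The core of any T+D pipeline is the disparity warp. The following is a minimal CPU sketch of DIBR forward warping for parallel cameras (a plain Python loop, not the authors' GPU implementation); hole filling and the LED pixel-misalignment correction are omitted, and the focal length and baseline are illustrative parameters.

```python
import numpy as np

def dibr_shift(texture, depth, baseline, f):
    """Forward-warp one texture+depth (T+D) frame to a virtual view.

    Disparity for parallel cameras is f * baseline / depth; each pixel
    is splatted to its shifted column, nearer pixels winning conflicts.
    Depth is assumed strictly positive.
    """
    h, w = depth.shape
    out = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)             # depth buffer per target pixel
    disp = np.round(f * baseline / depth).astype(int)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(xs - disp, 0, w - 1)          # target columns after the shift
    for y, x0, x1 in zip(ys.ravel(), xs.ravel(), xt.ravel()):
        if depth[y, x0] < zbuf[y, x1]:         # keep the nearest surface
            out[y, x1] = texture[y, x0]
            zbuf[y, x1] = depth[y, x0]
    return out
```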

  1. Viewing CAD Drawings on the Internet

    ERIC Educational Resources Information Center

    Schwendau, Mark

    2004-01-01

    Computer aided design (CAD) has been producing 3-D models for years. AutoCAD software is frequently used to create sophisticated 3-D models. These CAD files can be exported as 3DS files for import into Autodesk's 3-D Studio Viz. In this program, the user can render and modify the 3-D model before exporting it out as a WRL (world file hyperlinked)…

  2. Segmentation, surface rendering, and surface simplification of 3-D skull images for the repair of a large skull defect

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Shi, Pengfei; Li, Shuguang

    2009-10-01

    Given the potential demonstrated by research into bone-tissue engineering, the use of medical image data for the rapid prototyping (RP) of scaffolds is a subject worthy of research. Computer-aided design and manufacture and medical imaging have created new possibilities for RP. Accurate and efficient design and fabrication of anatomic models is critical to these applications. We explore the application of RP computational methods to the repair of a pediatric skull defect. The focus of this study is the segmentation of the defect region seen in computerized tomography (CT) slice images of this patient's skull and the three-dimensional (3-D) surface rendering of the patient's CT-scan data. We see if our segmentation and surface rendering software can improve the generation of an implant model to fill a skull defect.

  3. Cell shape characterization and classification with discrete Fourier transforms and self-organizing maps.

    PubMed

    Kriegel, Fabian L; Köhler, Ralf; Bayat-Sarmadi, Jannike; Bayerl, Simon; Hauser, Anja E; Niesner, Raluca; Luch, Andreas; Cseresnyes, Zoltan

    2018-03-01

    Cells in their natural environment often exhibit complex kinetic behavior and radical adjustments of their shapes. This enables them to accommodate short- and long-term changes in their surroundings under physiological and pathological conditions. Intravital multi-photon microscopy is a powerful tool to record this complex behavior. Traditionally, cell behavior is characterized by tracking the cells' movements, which yields numerous parameters describing the spatiotemporal characteristics of cells. Cells can be classified according to their tracking behavior using all or a subset of these kinetic parameters. This categorization can be supported by the a priori knowledge of experts. While such an approach provides an excellent starting point for analyzing complex intravital imaging data, faster methods are required for automated and unbiased characterization. In addition to their kinetic behavior, the 3D shapes of these cells also provide essential clues about the cells' status and functionality. New approaches that include the study of cell shapes may also allow the discovery of correlations amongst the track- and shape-describing parameters. In the current study, we examine the applicability of a set of Fourier components produced by the Discrete Fourier Transform (DFT) as a tool for more efficient and less biased classification of complex cell shapes. By carrying out a number of 3D-to-2D projections of surface-rendered cells, the applied method reduces the more complex 3D shape characterization to a series of 2D DFTs. The resulting shape factors are used to train a Self-Organizing Map (SOM), which provides an unbiased estimate of the best clustering of the data, thereby characterizing groups of cells according to their shape. We propose and demonstrate that such shape characterization is a powerful addition to, or replacement for, kinetic analysis. This would make it especially useful in situations where live kinetic imaging is less practical or not possible at all. © 2017 International Society for Advancement of Cytometry.
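
    The projection-then-DFT step reduces to classical Fourier shape descriptors. A minimal sketch under that reading is shown below: the 2D boundary of one projection is treated as a complex signal whose normalized spectrum magnitudes are translation-, scale-, and rotation-insensitive shape factors. Feeding these vectors to a SOM trainer (e.g., a package such as MiniSom) is the step not shown here; none of this is the authors' exact pipeline.

```python
import numpy as np

def fourier_shape_factors(contour, k=16):
    """Compute DFT shape factors from one 2D projection of a cell.

    contour: (N, 2) array of ordered boundary points. Returns the k
    lowest-frequency magnitudes, normalized so the factors are
    insensitive to translation, scale, rotation, and start point.
    """
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    z -= z.mean()                            # translation invariance
    spec = np.abs(np.fft.fft(z))             # dropping phase removes rotation
    spec /= spec[1]                          # scale invariance
    return spec[2:2 + k]                     # low-frequency shape factors
```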

  4. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  5. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
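
    The key run-time operation, applying a transfer function to per-voxel pdfs rather than to single filtered values, amounts to taking an expectation over intensity bins. A minimal dense sketch of that idea (ignoring the paper's sparse 4D-Gaussian representation and GPU path) is:

```python
import numpy as np

def shade_from_pdf(pdfs, tf):
    """Apply a transfer function to per-voxel intensity pdfs.

    pdfs: (nvoxels, nbins) normalized histograms of each coarse voxel's
    neighborhood; tf: (nbins, 4) RGBA transfer function sampled at the
    same bin centers. Each coarse voxel's color is the expectation
    E_pdf[tf], which is why results stay consistent across resolutions.
    """
    return pdfs @ tf   # (nvoxels, 4) RGBA values
```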

  6. LONI visualization environment.

    PubMed

    Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W

    2006-06-01

    Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and other clinical, pedagogical, and research endeavors.

  7. Three-dimensional rendering in medicine: some common misconceptions

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.

    2001-05-01

    As seen in the medical imaging literature and in the poster presentations at the annual conference of the Radiological Society of North America during the past 10 years, several misconceptions are held relating to 3D rendering of medical images. The purpose of this presentation is to illustrate and clarify these with medical examples. Most of the misconceptions have to do with a mix-up of the issues related to the common visualization techniques, viz., surface rendering (SR) and volume rendering (VR), and methods of image segmentation. In our survey, we came across the following most commonly held conceptions, which we believe (and shall demonstrate) are not correct: (1) SR equated to thresholding. (2) VR considered not to require segmentation. (3) VR considered to achieve higher resolution than SR. (4) SR/VR considered to require specialized hardware to achieve adequate speed. We shall briefly define and establish some fundamental terms to obviate any potential for terminology-related misconceptions. Subsequently, we shall sort out these issues and illustrate with examples why the above conceptions are incorrect. There are many SR methods that use segmentations far superior to thresholding. All VR techniques (except straightforward MIP) require some form of fuzzy object specification, that is, fuzzy segmentation. The details seen in renditions depend fundamentally on the segmentation techniques as well as on the rendering method. There are fast software-based rendering methods whose performance on PCs is similar to or exceeds that of expensive hardware systems. Most of the difficulties encountered in visualization (and also in image processing and analysis) stem from the difficulties in segmentation. It is important to identify these and separate them from the issues related purely to 3D rendering.

  8. Structure of collagen-glycosaminoglycan matrix and the influence to its integrity and stability.

    PubMed

    Bi, Yuying; Patra, Prabir; Faezipour, Miad

    2014-01-01

    Glycosaminoglycan (GAG) is a chain-like disaccharide that is linked to a polypeptide core to connect two collagen fibrils/fibers and provide the intermolecular force in the collagen-GAG matrix (C-G matrix). Thus, the distribution of GAG in the C-G matrix contributes to the integrity and mechanical properties of the matrix and related tissue. This paper analyzes the transverse isotropic distribution of GAG in the C-G matrix. The angle of GAGs relative to the collagen fibrils is used as a parameter to qualify the GAGs' isotropic characteristics in both 3D and 2D renderings. Statistical results showed that over one third of GAGs were oriented perpendicular to the collagen fibril, with a symmetrical distribution, for both the 3D matrix and the 2D plane crossing through collagen fibrils. The three factors tested in this paper (collagen radius, collagen distribution, and GAG density) were not statistically significant for the strength of the C-G matrix in the 3D rendering. However, in the 2D rendering, a significant factor for the GAGs directed to the orthogonal plane of the C-G matrix was the radius of the collagen in the matrix. Between the two cross-sections selected from the C-G matrix model, the plane crossing through collagen fibrils was symmetrically distributed, but the total percentage of perpendicularly directed GAGs was reduced by decreasing collagen radius. There were some symmetry features of the GAG angle distribution in the selected 2D plane that passed through the space between collagen fibrils, but most models showed multiple peaks in the GAG angle distribution. With fewer GAGs directed perpendicular to the collagen fibril, strength in the collagen cross-section weakened. Collagen distribution was also a factor that influences the GAG angle distribution in the 2D rendering. True hexagonal collagen packing is reported in this paper to have less strength at the collagen cross-section compared to a quasi-hexagonal collagen arrangement. This work focuses on the GAG matrix within the collagen and its relevance to anisotropy.

  9. Design and implementation of three-dimension texture mapping algorithm for panoramic system based on smart platform

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhou, Baotong; Zhang, Changnian

    2017-03-01

    Vehicle-mounted panoramic systems are important safety-assistance equipment for driving. However, traditional systems render only a fixed top-down perspective view of a limited field of view, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm can render a good 3D panorama and can change the viewpoint freely.

  10. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  11. Strain memory of 2D and 3D rigid inclusion populations in viscous flows - What is clast SPO telling us?

    NASA Astrophysics Data System (ADS)

    Stahr, Donald W.; Law, Richard D.

    2014-11-01

    We model the development of shape preferred orientation (SPO) of a large population of two- and three-dimensional (2D and 3D) rigid clasts suspended in a linear viscous matrix deformed by superposed steady and continuously non-steady plane strain flows to investigate the sensitivity of clasts to changing boundary conditions during a single or superposed deformation events. Resultant clast SPOs are compared to one developed by an identical initial population that experienced a steady flow history of constant kinematic vorticity and reached an identical finite strain state, allowing examination of SPO sensitivity to deformation path. Rotation paths of individual triaxial inclusions are complex, even for steady plane strain flow histories. It has been suggested that the 3D nature of the system renders predictions based on 2D models inadequate for applied clast-based kinematic vorticity gauges. We demonstrate that for a large population of clasts, simplification to a 2D model does provide a good approximation to the SPO predicted by full 3D analysis for steady and non-steady plane strain deformation paths. Predictions of shape fabric development from 2D models are not only qualitatively similar to the more complex 3D analysis, but they display the same limitations of techniques based on clast SPO commonly used as a quantitative kinematic vorticity gauge. Our model results from steady, superposed, and non-steady flow histories with a significant pure shearing component at a wide range of finite strain resemble predictions for an identical initial population that experienced a single steady simple shearing deformation. We conclude that individual 2D and 3D clasts respond instantaneously to changes in boundary conditions, however, in aggregate, the SPO of a population of rigid inclusions does not reflect the late-stage kinematics of deformation, nor is it an indicator of the unique 'mean' kinematic vorticity experienced by a deformed rock volume.

  12. Femtosecond two-photon high-resolution 3D imaging, spatial-volume rendering and microspectral characterization of immunolocalized MHC-II and mLangerin/CD207 antigens in the mouse epidermis.

    PubMed

    Tirlapur, Uday K; Mulholland, William J; Bellhouse, Brian J; Kendall, Mark; Cornhill, J Fredrick; Cui, Zhanfeng

    2006-10-01

    Langerhans cells (LCs) play a sentinel role by initiating both adaptive and innate immune responses to antigens pertinent to the skin. With the discovery of various LCs markers including antibodies to major histocompatibility complex class II (MHC-II) molecules and CD1a, intracellular presence of racket-shaped "Birbeck granules," and very recently Langerin/CD207, LCs can be readily distinguished from other subsets of dendritic cells. Femtosecond two-photon laser scanning microscopy (TPLSM) in recent years has emerged as an alternative to the single photon-excitation based confocal laser scanning microscope (CLSM), particularly for minimally-invasive deep-tissue 3D and 4D vital as well as nonvital biomedical imaging. We have recently combined high resolution two-photon immunofluorescence (using anti MHC-II and Langerin/CD207 antibodies) imaging with microspectroscopy and advanced image-processing/volume-rendering modalities. In this work, we demonstrate the use of this novel state-of-the-art combinational approach to characterize the steady state 3D organization and spectral features of the mouse epidermis, particularly to identify the spatial distribution of LCs. Our findings provide unequivocal direct evidence that, in the mouse epidermis, the MHC-II and mLangerin/CD207 antigens do indeed manifest a high degree of colocalization around the nucleus of the LCs, while in the distal dendritic processes, mLangerin/CD207 antigens are rather sparsely distributed as punctuate structures. This unique possibility to simultaneously visualize high resolution 3D-resolved spatial distributions of two different immuno-reactive antigens, namely MHC-II and mLangerin/CD207, along with the nuclei of LCs and the adjacent epidermal cells can find interesting applications. These could involve aspects associated with pragmatic analysis of the kinetics of LCs migration as a function of immuno-dermatological responses during (1) human Immunodeficiency virus disease progression, (2) vaccination and targeted gene therapy, (3) skin transplantation/plastic surgery, (4) ultraviolet and other radiation exposure, (5) tissue-engineering of 3D skin constructs, as well as in (6) cosmetic industry, to unravel the influence of cosmeceuticals.

  13. Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates

    NASA Astrophysics Data System (ADS)

    Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.

    2002-03-01

    A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are promising for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, with IBMR techniques a frequent problem is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem: distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite onto a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results indicating that we have achieved a stable frame rate. Beyond LOD-Sprite, our distribution method holds promise for other IBMR techniques.

  14. HDlive rendering images of the fetal stomach: a preliminary report.

    PubMed

    Inubashiri, Eisuke; Abe, Kiyotaka; Watanabe, Yukio; Akutagawa, Noriyuki; Kuroki, Katumaru; Sugawara, Masaki; Maeda, Nobuhiko; Minami, Kunihiro; Nomura, Yasuhiro

    2015-01-01

    This study aimed to show reconstruction of the fetal stomach using the HDlive rendering mode in ultrasound. Seventeen healthy singleton fetuses at 18-34 weeks' gestational age were observed in utero using the HDlive rendering mode. In all of the fetuses, we identified specific spatial structures, including macroscopic anatomical features (e.g., the pylorus, cardia, fundus, and greater curvature) of the fetal stomach, using the HDlive rendering mode. In particular, HDlive rendering images showed remarkably fine details that appeared as if they were being viewed under an endoscope, with visible rugal folds after 27 weeks' gestational age. Our study suggests that the HDlive rendering mode can be used as an additional method for evaluating the fetal stomach. The HDlive rendering mode shows detailed 3D structural images and anatomically realistic images of the fetal stomach. This technique may be effective in prenatal diagnosis for examining detailed information about fetal organs.

  15. Helical CT scan with 2D and 3D reconstructions and virtual endoscopy versus conventional endoscopy in the assessment of airway disease in neonates, infants and children.

    PubMed

    Yunus, Mahira

    2012-11-01

    To study the use of helical computed tomography 2D and 3D images and virtual endoscopy in the evaluation of airway disease in neonates, infants, and children, and its value in lesion detection, characterisation, and extension. Conducted at Al-Noor Hospital, Makkah, Saudi Arabia, from January 1 to June 30, 2006, the study comprised 40 patients with stridor, having various causes of airway obstruction. They were examined by helical CT scan with 2D and 3D reconstructions and virtual endoscopy. The level and characterisation of lesions were determined, and the results were compared with actual endoscopic findings. Conventional endoscopy was chosen as the gold standard, and the evaluation of virtual endoscopy was done in terms of sensitivity and specificity of the procedure. For statistical purposes, SPSS version 10 was used. All CT methods detected airway stenosis or obstruction. Accuracy was 98% (n=40) for virtual endoscopy, 96% (n=48) for 3D external rendering, 90% (n=45) for multiplanar reconstructions, and 86% (n=43) for axial images. For detection and grading of stenosis, the results of 3D internal and external volume rendering images were closer to conventional endoscopy than those of 2D minimum-intensity multiplanar reconstruction and axial CT slices. Even high-grade stenosis, through which a conventional endoscope cannot be passed, could be evaluated with the virtual endoscope. One case, a 4-year-old patient with tracheomalacia, could not be diagnosed by helical CT scan and virtual bronchoscopy; it was diagnosed on conventional endoscopy and needed CT scans in inspiration and expiration. Virtual endoscopy (VE) enabled better assessment of stenosis compared with the reading of 3D external rendering, 2D multiplanar reconstruction (MPR), or axial slices. It can replace conventional endoscopy in the assessment of airway disease without any additional risk.

  16. a Cache Design Method for Spatial Information Visualization in 3d Real-Time Rendering Engine

    NASA Astrophysics Data System (ADS)

    Dai, X.; Xiong, H.; Zheng, X.

    2012-07-01

    A well-designed cache system has positive impacts on a 3D real-time rendering engine. As the amount of visualization data grows, the effects become more obvious. Caches are the basis on which a 3D real-time rendering engine can smoothly browse through data that is out of core memory or comes from the internet. In this article, a new kind of cache, based on multiple threads and large files, is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache, and the elimination cache. The rendering cache stores the data being rendered in the engine; the data dispatched according to the position of the viewpoint in the horizontal and vertical directions is stored in the pre-rendering cache; the data eliminated from the previous caches is stored in the elimination cache and is subsequently written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the length limit (128 MB is the cap in the experiment), no item is eliminated from the file; instead, a new large cache file is created. If the number of large files exceeds the pre-set maximum, the earliest file is deleted from the disk. In this way, only one file is open for writing and reading, and the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of the large files is limited so that they can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: they load data into the rendering cache as soon as possible for rendering, load data into the pre-rendering cache for rendering the next few frames, and load data that is not needed for the moment into the elimination cache. In our experiment, two threads are designed. The first thread organizes the memory cache according to the viewpoint and maintains two lists: the adding list, which indexes the data that should be loaded into the pre-rendering cache immediately, and the deleting list, which indexes the data that is no longer visible in the rendered scene and should be moved to the elimination cache. The other thread moves the data in the memory and disk caches according to the adding and deleting lists, creates download requests when data is indexed in the adding list but cannot be found in either the memory cache or the disk cache, and moves elimination-cache data to the disk cache when the adding and deleting lists are empty. The cache designed as described above proved reliable and efficient in our experiment, and data loading time and file I/O time decreased sharply, especially as the rendering data grew larger.
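
    A toy version of the three-tier memory cache makes the eviction chain concrete. The sketch below is illustrative only (arbitrary capacities, Python `OrderedDict`s standing in for the engine's structures); the disk-cache large files and the two worker threads are not modeled.

```python
from collections import OrderedDict

class RenderCache:
    """Toy three-tier memory cache mirroring the scheme described above.

    Tiles fall from the rendering cache into the pre-rendering cache,
    then into the elimination cache, whose contents a writer thread
    would flush into the large disk-cache files. All capacities are
    arbitrary illustrative values.
    """

    def __init__(self, render_n=256, prefetch_n=512):
        self.render = OrderedDict()     # tiles visible this frame
        self.prefetch = OrderedDict()   # tiles near the viewpoint
        self.eliminate = OrderedDict()  # tiles awaiting the disk write
        self.render_n, self.prefetch_n = render_n, prefetch_n

    def promote(self, key, tile):
        """Bring a tile into the rendering cache, spilling overflow down."""
        self.prefetch.pop(key, None)            # may have been prefetched
        self.render[key] = tile
        self.render.move_to_end(key)            # mark most recently used
        if len(self.render) > self.render_n:    # spill LRU tile downward
            k, t = self.render.popitem(last=False)
            self.prefetch[k] = t
        if len(self.prefetch) > self.prefetch_n:
            k, t = self.prefetch.popitem(last=False)
            self.eliminate[k] = t               # a writer thread drains this
```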

  17. Resolution-independent surface rendering using programmable graphics hardware

    DOEpatents

    Loop, Charles T.; Blinn, James Frederick

    2008-12-16

    Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.

  18. Fast software-based volume rendering using multimedia instructions on PC platforms and its application to virtual endoscopy

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro

    2003-05-01

    This paper describes a software-based fast volume rendering (VolR) method on a PC platform using multimedia instructions, such as SIMD instructions, which are available in current PC CPUs. This method achieves fast rendering through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot products of normal vectors and light-source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes of volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxels by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation, on a conventional PC without any special hardware, at thirteen frames per second.
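
    The compositing loop at the heart of ray casting is the part that maps most naturally onto SIMD: the same arithmetic is applied to many values per instruction. The NumPy sketch below mimics that data-parallel style, with arrays of rays standing in for SIMD registers; it illustrates the general front-to-back compositing with early termination, not the paper's optimized implementation.

```python
import numpy as np

def composite_rays(samples, tf):
    """Front-to-back compositing of pre-interpolated ray samples.

    samples: (nrays, nsteps) scalar values already interpolated along
    each ray (process (a) above); tf maps a value array to (rgb, alpha)
    arrays. Marching all rays together one step at a time is the
    data-parallel analogue of the paper's SIMD inner loop, and the
    final alpha test plays the role of early ray termination.
    """
    nrays, nsteps = samples.shape
    rgb = np.zeros((nrays, 3))
    alpha = np.zeros(nrays)
    for i in range(nsteps):                   # march every ray one step
        c, a = tf(samples[:, i])              # c: (nrays, 3), a: (nrays,)
        w = (1.0 - alpha) * a                 # remaining transparency
        rgb += w[:, None] * c
        alpha += w
        if np.all(alpha > 0.99):              # every ray is opaque: stop
            break
    return rgb, alpha
```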

  19. Direct volumetric rendering based on point primitives in OpenGL.

    PubMed

    da Rosa, André Luiz Miranda; de Almeida Souza, Ilana; Yuuji Hira, Adilson; Zuffo, Marcelo Knörich

    2006-01-01

    The aim of this project is to present a software rendering algorithm for acquired volumetric data. The algorithm was implemented in the Java language using the LWJGL graphics library, allowing volume rendering in software and thus avoiding the need to acquire specific graphics boards for the 3D reconstruction. The algorithm creates a model in OpenGL through point primitives, where each voxel becomes a point with the color values of the corresponding pixel position in the source images.

  20. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward answer to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for the compression of tremendous amounts of data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  1. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are scanned with larger volumes of data, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for developing a 3D image display component, not only with normal 3D display functions but also with multimodal medical image fusion as well as computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g., CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance, and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work at a PACS display workstation at any time.

  2. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.

  3. Quantitative Live-Cell Confocal Imaging of 3D Spheroids in a High-Throughput Format.

    PubMed

    Leary, Elizabeth; Rhee, Claire; Wilks, Benjamin T; Morgan, Jeffrey R

    2018-06-01

    Accurately predicting the human response to new compounds is critical to a wide variety of industries. Standard screening pipelines (including both in vitro and in vivo models) often lack predictive power. Three-dimensional (3D) culture systems of human cells, a more physiologically relevant platform, could provide a high-throughput, automated means to test the efficacy and/or toxicity of novel substances. However, the challenges of obtaining high-magnification confocal z stacks of 3D spheroids and understanding their respective quantitative limitations must be overcome first. To address this challenge, we developed a method to form spheroids of reproducible size at precise spatial locations across a 96-well plate. Spheroids of variable radii were labeled with four different fluorescent dyes and imaged with a high-throughput confocal microscope. 3D renderings of the spheroids had a complex bowl-like appearance. We systematically analyzed these confocal z stacks to determine the depth of imaging and the effect of spheroid size and dyes on quantitation. Furthermore, we have shown that the resulting loss of fluorescence with depth can be addressed through the use of ratio imaging. Overall, understanding both the limitations of confocal imaging and the tools to correct for these limits is critical for developing accurate quantitative assays using 3D spheroids.

  4. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of multi-view image arrays combined with virtual-viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained; during this process, SAD is chosen as the similarity measure function. Then the reference image is layered and the parallax is calculated based on the depth information. Using the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the method achieves satisfactory image quality: the mean SSIM of the results relative to real viewpoint images reaches 0.9525, the PSNR reaches 38.353, and the image-histogram similarity reaches 93.77%.
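
    The layer-shift-composite structure is what removes the per-pixel warping of classical DIBR. Below is a minimal sketch of that idea (whole layers panned by a single weighted disparity, then blended back-to-front); the SAD-based depth estimation and the GPU implementation are omitted, and the per-layer alpha masks are an assumption for illustration.

```python
import numpy as np

def render_layers(rgba_layers, disparities):
    """Composite pre-segmented image layers into a virtual view.

    rgba_layers: list of (H, W, 4) float arrays ordered far-to-near;
    disparities: per-layer horizontal shifts, already weighted by the
    virtual-to-reference viewpoint distance. Each layer is panned as a
    whole instead of warping every pixel, which keeps rendering fast.
    """
    h, w, _ = rgba_layers[0].shape
    out = np.zeros((h, w, 3))
    for layer, disp in zip(rgba_layers, disparities):
        shifted = np.roll(layer, int(round(disp)), axis=1)  # pan the layer
        a = shifted[..., 3:4]
        out = (1.0 - a) * out + a * shifted[..., :3]        # "over" blend
    return out
```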

  5. [Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].

    PubMed

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

    2011-10-01

    To evaluate the anatomy of the cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion-weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution with 3D PSIF-DWI, we collected imaging data of 20 cavernous regions (both sides in 10 normal subjects). 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing the optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized, and 12 trochlear nerves and 6 abducens nerves were partially identified. We also present preliminary clinical experiences in two cases with pituitary adenomas. The anatomical relationship between the tumor and the cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended with 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high-resolution "cranial nerve imaging", which visualizes the whole length of the cranial nerves, including the parts within blood flow as in the cavernous sinus region.

  6. Synthetic vision in the cockpit: 3D systems for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth

    2001-08-01

    Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are at or out-pacing Moore's Law with the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.

  7. Repercussion of geometric and dynamic constraints on the 3D rendering quality in structurally adaptive multi-view shooting systems

    NASA Astrophysics Data System (ADS)

    Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine

    2011-12-01

    In this paper, a simulator of a multi-view shooting system with parallel optical axes and a structurally variable configuration is proposed. The considered system is dedicated to the production of 3D content for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed, and the different viewing, transformation, and capture parameters are defined. An appropriate perspective projection model is then derived to work out a simulator. First, the simulator is used to validate the global geometrical process in the case of a static configuration. Next, the simulator is used to show the limitations of a static configuration of this type of shooting system by considering the case of dynamic scenes, and a dynamic scheme is then devised to allow correct capture of such scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality, and whether they need to be adapted, is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and some image quantization tools. Simulation and experimental results are presented throughout the paper to illustrate the different points studied. Some conclusions and perspectives end the paper.
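
    For a parallel-optical-axis rig of the kind simulated here, the projection model reduces to a pinhole projection whose horizontal coordinate is offset per camera. The sketch below shows that geometry under standard pinhole assumptions (the symbols are generic, not the paper's notation): disparity between neighboring views depends only on depth, which is what such a simulator sweeps when varying capture parameters.

```python
def project_parallel_axes(point, cam_x, f, cx=0.0):
    """Project a 3D point into one camera of a parallel-axis array.

    point = (X, Y, Z) in a frame whose Z axis matches the optical axes;
    cam_x is the camera's horizontal offset along the rig; f is the
    focal length in pixels and cx the principal-point offset. Because
    the axes are parallel, views differ only horizontally.
    """
    X, Y, Z = point
    u = f * (X - cam_x) / Z + cx   # horizontal image coordinate
    v = f * Y / Z                  # vertical: identical in every view
    return u, v

# Disparity between two neighboring cameras spaced by a baseline b:
#   d = u_i - u_{i+1} = f * b / Z   (depends only on the depth Z)
```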

  8. Semiconductive 3-D haloplumbate framework hybrids with high color rendering index white-light emission.

    PubMed

    Wang, Guan-E; Xu, Gang; Wang, Ming-Sheng; Cai, Li-Zhen; Li, Wen-Hua; Guo, Guo-Cong

    2015-12-01

    Single-component white-light materials may create great opportunities for conventional lighting applications and display systems; however, their reported color rendering index (CRI) values, one of the key parameters for lighting, are less than 90, which does not satisfy the demands of color-critical upmarket applications such as photography, cinematography, and art galleries. In this work, two semiconductive chloroplumbate (chloride anion of lead(II)) hybrids, obtained using a new inorganic-organic hybrid strategy, show unprecedented 3-D inorganic framework structures and white-light-emitting properties with high CRI values around 90, one of which shows the highest value reported to date.

  9. Real-time interactive virtual tour on the World Wide Web (WWW)

    NASA Astrophysics Data System (ADS)

    Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi

    2003-12-01

    Web-based virtual tours have become desirable and in-demand applications, yet they are challenging to build because of the web application's running environment, with its limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process and high bandwidth and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos; the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with the image-based rendering approach to give viewers the immersive experience of walking around the virtual space using several snapshots from conventional photos.

  10. Effects of impurity doping on ionic conductivity and polarization phenomenon in TlBr

    NASA Astrophysics Data System (ADS)

    Du, Mao-Hua

    2013-02-01

    Ionic conductivity due to vacancy diffusion, and the resulting polarization phenomenon, are major challenges to the development of TlBr radiation detectors. It has been proposed that impurity doping of TlBr can suppress the ionic conductivity because the impurities getter vacancies to form neutral complexes. This paper shows that isolated vacancies can maintain their equilibrium concentrations even at room temperature, rendering any gettering method ineffective. The main effect of doping is to change the Fermi level and consequently the vacancy concentration. The minimal ionic conductivity is reached at a donor concentration of [D⁺] = 4 × 10¹⁶ cm⁻³.
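
    The gettering argument turns on simple defect statistics: if isolated vacancies stay equilibrated, their concentration is set by the formation energy and temperature rather than by dopant traps. A back-of-the-envelope sketch of that equilibrium concentration follows; all numbers are assumed for illustration and are not taken from the paper.

```python
import numpy as np

k_B = 8.617e-5          # Boltzmann constant, eV/K
E_f = 0.35              # assumed vacancy formation energy, eV
N = 1.6e22              # approximate lattice-site density in TlBr, cm^-3

# Equilibrium vacancy concentration: n_v = N * exp(-E_f / (k_B * T)).
for T in (300.0, 350.0):
    n_v = N * np.exp(-E_f / (k_B * T))
    print(f"T = {T:.0f} K: n_v ~ {n_v:.1e} cm^-3")
```

    With parameters of this order, the equilibrium vacancy population at room temperature is comparable to the quoted donor concentration, consistent with the paper's point that gettering alone cannot suppress it.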

  11. True 3D display and BeoWulf connectivity

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Kostrzewski, Andrew A.; Kupiec, Stephen A.; Yu, Kevin H.; Aye, Tin M.; Savant, Gajendra D.

    2003-09-01

    We propose a novel true 3-D display based on holographic optics, called HAD (Holographic Autostereoscopic Display), or, in its latest generation, HILAR (Holographic Inverse Look-around and Autostereoscopic Reality). Unlike state-of-the-art 3-D systems, which do not work without goggles, it requires none, and it has a table-like 360° look-around capability. Novel 3-D image-rendering software based on Beowulf PC cluster hardware is also discussed.

  12. Three-dimensional display of cortical anatomy and vasculature: MR angiography versus multimodality integration

    NASA Astrophysics Data System (ADS)

    Henri, Christopher J.; Pike, Gordon; Collins, D. Louis; Peters, Terence M.

    1990-07-01

    We present two methods for acquiring and viewing integrated 3-D images of cerebral vasculature and cortical anatomy. The aim of each technique is to provide the neurosurgeon or radiologist with a 3-D image containing information that cannot ordinarily be obtained from a single imaging modality. The first approach employs recent developments in MR, which is now capable of imaging flowing blood as well as static tissue. Here, true 3-D data are acquired and displayed using volume or surface rendering techniques. The second approach is based on the integration of x-ray projection angiograms and tomographic image data, allowing a composite image of anatomy and vasculature to be viewed in 3-D. This is accomplished by superimposing an angiographic stereo-pair onto volume-rendered images of either CT or MR data created from matched viewing geometries. The two approaches are outlined and compared. Results are presented for each technique, and potential clinical applications are discussed.

  13. Volume-rendering on a 3D hyperwall: A molecular visualization platform for research, education and outreach.

    PubMed

    MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy

    2016-11-01

    We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized, and functionally related images). The visual texture of the images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research and for pedagogical and STEM education outreach purposes.

  14. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-06-01

    The study proposes a novel approach to segmentation and visualization, with value-added surface-area and volume measurements, for brain medical image analysis. The proposed method comprises edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface-area and volume measurements for 3D objects of interest (i.e., brain tumor, brain tissue, or the whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated using techniques from linear algebra and surface integration. Experimental results are reported in terms of 3D object extraction, surface and volume rendering, and surface-area and volume measurements for medical image analysis.
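
    A minimal sketch of this kind of pipeline (segment, mesh, then measure), assuming synthetic data and 1 mm isotropic voxels, and substituting scikit-image's morphological Chan-Vese level set for the paper's Bayesian variant:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese
from skimage.measure import marching_cubes, mesh_surface_area

# Synthetic stand-in for a brain volume: a bright ellipsoidal "tumor".
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
vol = (((z / 20.0) ** 2 + (y / 14.0) ** 2 + (x / 10.0) ** 2) < 1).astype(float)
vol += 0.2 * np.random.default_rng(0).standard_normal(vol.shape)

# Region-based level set segmentation (50 iterations).
seg = morphological_chan_vese(vol, 50, init_level_set='checkerboard')

# Mesh the segmented object, then measure surface area and volume.
verts, faces, _, _ = marching_cubes(seg.astype(float), level=0.5)
voxel_mm3 = 1.0                       # assumed isotropic 1 mm voxels
print("surface area (mm^2):", mesh_surface_area(verts, faces))
print("volume (mm^3):", seg.sum() * voxel_mm3)
```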

  15. Volumetric data analysis using Morse-Smale complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Natarajan, V; Pascucci, V

    2005-10-13

    The 3D Morse-Smale complex is a fundamental topological construct that partitions the domain of a real-valued function into regions having uniform gradient flow behavior. In this paper, we consider the construction and selective presentation of cells of the Morse-Smale complex and their use in the analysis and visualization of scientific datasets. We take advantage of the fact that cells of different dimension often characterize different types of features present in the data. For example, critical points pinpoint changes in topology by showing where components of the level sets are created, destroyed or modified in genus. Edges of the Morse-Smale complex extract filament-like features that are not explicitly modeled in the original data. Interactive selection and rendering of portions of the Morse-Smale complex introduces fundamental data management challenges due to the unstructured nature of the complex even for structured inputs. We describe a data structure that stores the Morse-Smale complex and allows efficient selective traversal of regions of interest. Finally, we illustrate the practical use of this approach by applying it to cryo-electron microscopy data of protein molecules.

  16. 3D annotation and manipulation of medical anatomical structures

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Schaller, Christian; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data are still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out on 2D projections of the original object. A major problem in 2D annotation is depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar setting of four infrared LEDs with known and exact positions, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose-estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume or depth information computed from a 2D image to provide a real 3D experience without special glasses. Within this paper we present a new framework for the manipulation and annotation of medical landmarks directly in a three-dimensional volume.
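
    A pose-estimation step of this kind can be sketched with OpenCV's solvePnP, which recovers rotation and translation from known 3D points and their 2D projections. The LED layout, pixel coordinates, and camera intrinsics below are hypothetical stand-ins, not the authors' calibration:

```python
import numpy as np
import cv2

# Hypothetical non-coplanar layout of the four IR LEDs (metres).
object_pts = np.array([[0.00, 0.00, 0.00],
                       [0.10, 0.00, 0.00],
                       [0.00, 0.10, 0.00],
                       [0.05, 0.05, 0.08]], dtype=np.float64)

# LED blob centroids as tracked on the Wii Remote's 1024x768 IR camera.
image_pts = np.array([[512.0, 384.0],
                      [640.0, 380.0],
                      [515.0, 250.0],
                      [575.0, 300.0]], dtype=np.float64)

# Rough pinhole intrinsics for the IR camera (assumed, not calibrated).
K = np.array([[1300.0, 0.0, 512.0],
              [0.0, 1300.0, 384.0],
              [0.0, 0.0, 1.0]])

# Solve the perspective-n-point problem for the device pose.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)          # rotation matrix of the device
print("R =\n", R, "\nt =", tvec.ravel())
```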

  17. Linking morphology with activity through the lifetime of pretreated PtNi nanostructured thin film catalysts

    DOE PAGES

    Cullen, David A.; Lopez-Haro, Miguel; Bayle-Guillemaud, Pascale; ...

    2015-04-10

    In this study, the nanoscale morphology of highly active Pt₃Ni₇ nanostructured thin-film fuel cell catalysts is linked with catalyst surface area and activity following catalyst pretreatments, conditioning, and potential cycling. The significant role of fuel cell conditioning in the structure and composition of these extended-surface catalysts is demonstrated by high-resolution imaging, elemental mapping, and tomography. The dissolution of Ni during fuel cell conditioning leads to highly complex, porous structures, which were visualized in 3D by electron tomography. Quantification of the rendered surfaces following catalyst pretreatment, conditioning, and cycling shows the important role pore structure plays in surface area, activity, and durability.

  18. Computer-Assisted Orthognathic Surgery for Patients with Cleft Lip/Palate: From Traditional Planning to Three-Dimensional Surgical Simulation

    PubMed Central

    Lonic, Daniel; Pai, Betty Chien-Jung; Yamaguchi, Kazuaki; Chortrakarnkij, Peerasak; Lin, Hsiu-Hsia; Lo, Lun-Jou

    2016-01-01

    Background: Although conventional two-dimensional (2D) methods for orthognathic surgery planning are still popular, the use of three-dimensional (3D) simulation is steadily increasing. In facial asymmetry cases, such as in cleft lip/palate patients, the additional information can dramatically improve planning accuracy and outcome. The purpose of this study is to investigate which parameters are changed most frequently in transferring a traditional 2D plan to a 3D simulation, and which planning parameters can be better adjusted by this method. Patients and Methods: This prospective study enrolled 30 consecutive patients with cleft lip and/or cleft palate (mean age 18.6±2.9 years, range 15 to 32 years). All patients received two-jaw single-splint orthognathic surgery. 2D orthodontic surgery plans were transferred into a 3D setting. Severe bony collisions in the ramus area after 2D plan transfer were noted. The position of the maxillo-mandibular complex was evaluated and, where necessary, adjusted. Changes in roll, midline, pitch, yaw, and genioplasty position, and their frequency within the patient group, were recorded as alterations of the initial 2D plan. Patients were divided into groups with no change from the original 2D plan and with changes in one, two, three, or four of the aforementioned parameters, as well as subgroups of unilateral cleft lip/palate, bilateral cleft lip/palate, and isolated cleft palate cases. Postoperative OQLQ scores were obtained for the 20 patients who finished orthodontic treatment. Results: 83.3% of 2D plans were modified, mostly concerning yaw (63.3%) and midline (36.7%) adjustments. Yaw adjustments had the highest mean values in total and in all subgroups. Severe bony collisions as a result of 2D planning were seen in 46.7% of patients. Possible asymmetry was regularly foreseen and corrected in the 3D simulation. Conclusion: Based on our findings, 3D simulation renders important information for accurate planning in complex cleft lip/palate cases involving facial asymmetry that is regularly missed in conventional 2D planning. PMID:27002726

  19. Computer-Assisted Orthognathic Surgery for Patients with Cleft Lip/Palate: From Traditional Planning to Three-Dimensional Surgical Simulation.

    PubMed

    Lonic, Daniel; Pai, Betty Chien-Jung; Yamaguchi, Kazuaki; Chortrakarnkij, Peerasak; Lin, Hsiu-Hsia; Lo, Lun-Jou

    2016-01-01

    Although conventional two-dimensional (2D) methods for orthognathic surgery planning are still popular, the use of three-dimensional (3D) simulation is steadily increasing. In facial asymmetry cases, such as in cleft lip/palate patients, the additional information can dramatically improve planning accuracy and outcome. The purpose of this study is to investigate which parameters are changed most frequently in transferring a traditional 2D plan to a 3D simulation, and which planning parameters can be better adjusted by this method. This prospective study enrolled 30 consecutive patients with cleft lip and/or cleft palate (mean age 18.6±2.9 years, range 15 to 32 years). All patients received two-jaw single-splint orthognathic surgery. 2D orthodontic surgery plans were transferred into a 3D setting. Severe bony collisions in the ramus area after 2D plan transfer were noted. The position of the maxillo-mandibular complex was evaluated and, where necessary, adjusted. Changes in roll, midline, pitch, yaw, and genioplasty position, and their frequency within the patient group, were recorded as alterations of the initial 2D plan. Patients were divided into groups with no change from the original 2D plan and with changes in one, two, three, or four of the aforementioned parameters, as well as subgroups of unilateral cleft lip/palate, bilateral cleft lip/palate, and isolated cleft palate cases. 83.3% of 2D plans were modified, mostly concerning yaw (63.3%) and midline (36.7%) adjustments. Yaw adjustments had the highest mean values in total and in all subgroups. Severe bony collisions as a result of 2D planning were seen in 46.7% of patients. Possible asymmetry was regularly foreseen and corrected in the 3D simulation. Based on our findings, 3D simulation renders important information for accurate planning in complex cleft lip/palate cases involving facial asymmetry that is regularly missed in conventional 2D planning.

  20. Chromium (D-phenylalanine)3 supplementation alters glucose disposal, insulin signaling, and glucose transporter-4 membrane translocation in insulin-resistant mice.

    PubMed

    Dong, Feng; Kandadi, Machender Reddy; Ren, Jun; Sreejayan, Nair

    2008-10-01

    Chromium has gained popularity as a nutritional supplement for diabetic and insulin-resistant subjects. This study was designed to evaluate the effect of chronic administration of a novel chromium complex of D-phenylalanine [Cr(D-phe)₃] in insulin-resistant, sucrose-fed mice. Whole-body insulin resistance was generated in FVB mice by 9 wk of sucrose feeding, following which they were randomly assigned to be unsupplemented (S group) or to receive oral Cr(D-phe)₃ in drinking water (SCr group) at a dose of 45 μg·kg⁻¹·d⁻¹ (approximately 3.8 μg of elemental chromium·kg⁻¹·d⁻¹). A control group (C) did not consume sucrose and was not supplemented. Sucrose-fed mice had an elevated serum insulin concentration compared with controls; this was significantly lower in sucrose-fed mice that received Cr(D-phe)₃, which did not differ from controls. Impaired glucose tolerance in sucrose-fed mice, evidenced by the poor glucose disposal rate following an intraperitoneal glucose tolerance test, was significantly improved in mice receiving Cr(D-phe)₃. Chromium supplementation significantly enhanced insulin-stimulated Akt phosphorylation and membrane-associated glucose transporter-4 in skeletal muscles of sucrose-fed mice. In cultured adipocytes rendered insulin-resistant by chronic exposure to high concentrations of glucose and insulin, Cr(D-phe)₃ augmented Akt phosphorylation and glucose uptake. These results indicate that dietary supplementation with Cr(D-phe)₃ may have beneficial effects in insulin-resistant, prediabetic conditions.

  1. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

    The interpretation of radiological images is routine, but it remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. An easier and more extensive visualization and exploitation of medical images can be achieved through computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of, and interaction with, virtual organs delineated from CT scans or MRI. This software provides real-time 3D surface rendering of anatomical structures, accurate evaluation of volumes and distances, and improved radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning, allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step in the future development of augmented reality and surgical simulation systems.

  2. Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization

    NASA Technical Reports Server (NTRS)

    Beaulieu, K.

    2014-01-01

    Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.

  3. Polysaccharide-based hydrogels with tunable composition as 3D cell culture systems.

    PubMed

    Gentilini, Roberta; Munarin, Fabiola; Bloise, Nora; Secchi, Eleonora; Visai, Livia; Tanzi, Maria Cristina; Petrini, Paola

    2018-04-01

    To date, cell cultures have been created either on 2-dimensional (2D) polystyrene surfaces or in 3-dimensional (3D) systems that do not offer a controlled chemical composition and that lack both the soft environment encountered in vivo and the chemical stimuli that promote cell proliferation and allow complex cellular behavior. In this study, pectin-based hydrogels were developed and are proposed as versatile cell culture systems. Pectin-based hydrogels were produced by internally crosslinking pectin with calcium carbonate at different initial pH values, aiming to control crosslinking kinetics and degree. Additionally, glucose and glutamine were added as additives, and their effects on the viscoelastic properties of the hydrogels and on cell viability were investigated. Pectin hydrogels showed high cell viability and shear-thinning behavior. Independently of hydrogel composition, an initial swelling was observed, followed by a low percentage of weight variation and a steady-state stage. The addition of glucose and glutamine to pectin-based hydrogels yielded higher cell viability, up to 90%-98% after 1 hour of incubation, maintained for up to 7 days of culture, yet no effect on viscoelastic properties was detected. Pectin-based hydrogels offering tunable composition were developed successfully. They are envisioned as a synthetic extracellular matrix (ECM), either to study complex cellular behaviors or to be applied as tissue engineering substitutes.

  4. Simplifying the exploration of volumetric images: development of a 3D user interface for the radiologist's workplace.

    PubMed

    Teistler, M; Breiman, R S; Lison, T; Bott, O J; Pretschner, D P; Aziz, A; Nowinski, W L

    2008-10-01

    Volumetric imaging (computed tomography and magnetic resonance imaging) provides increased diagnostic detail but is associated with the problem of navigating through large amounts of data. In an attempt to overcome this problem, a novel 3D navigation tool based on an alternative input device has been designed and developed. A 3D mouse allows the simultaneous definition of position and orientation of orthogonal or oblique multiplanar reformatted images or slabs, which are presented within a virtual 3D scene together with the volume-rendered data set, and additionally as 2D images. Slabs are visualized with maximum intensity projection, average intensity projection, or the standard volume rendering technique. A prototype based on PC technology has been implemented and tested by several radiologists; it has been shown to be easily understandable and usable after a very short learning phase. Our solution may help to fully exploit the diagnostic potential of volumetric imaging by allowing a more efficient reading process compared with currently deployed solutions based on a conventional mouse and keyboard.
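
    The slab modes described above reduce to simple per-axis reductions over a sub-volume. A minimal numpy sketch on synthetic data; the volume shape, slab position, and thickness are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((256, 256, 160))        # stand-in CT/MR volume

def render_slab(vol, axis, start, thickness, mode="mip"):
    """Collapse a slab of slices into one 2D image."""
    sl = [slice(None)] * 3
    sl[axis] = slice(start, start + thickness)
    slab = vol[tuple(sl)]
    if mode == "mip":                        # maximum intensity projection
        return slab.max(axis=axis)
    return slab.mean(axis=axis)              # average intensity projection

mip = render_slab(volume, axis=2, start=70, thickness=20, mode="mip")
aip = render_slab(volume, axis=2, start=70, thickness=20, mode="avg")
print(mip.shape, aip.shape)                  # (256, 256) each
```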

  5. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for the development of 3D printer applications, as the segmentation of bony anatomy to generate printable models is relatively straightforward. Important issues should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that, for the segmentation and printing process used, measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and by measurements made on the 3D rendered vertebra.

  6. Rendering of 3D-wavelet-compressed concentric mosaic scenery with progressive inverse wavelet synthesis (PIWS)

    NASA Astrophysics Data System (ADS)

    Wu, Yunnan; Luo, Lin; Li, Jin; Zhang, Ya-Qin

    2000-05-01

    The concentric mosaics offer a quick solution to the construction and navigation of a virtual environment. To reduce the vast data volume of the concentric mosaics, a compression scheme based on a 3D wavelet transform was proposed in a previous paper. In this work, we investigate the efficient implementation of the renderer. It is preferable not to expand the compressed bitstream as a whole, so that the memory consumption of the renderer can be reduced. Instead, only the data necessary to render the current view are accessed and decoded. The progressive inverse wavelet synthesis (PIWS) algorithm is proposed to provide this random data access and to reduce the calculation required by data-access requests to a minimum. A mixed cache is used in PIWS, in which entropy-decoded wavelet coefficients, intermediate lifting results, and fully synthesized pixels are all stored in the same memory unit, thanks to the in-place calculation property of the lifting implementation. PIWS operates with a finite state machine, where each memory unit is tagged with a state indicating what type of content it currently stores. The computational savings achieved by PIWS are demonstrated with extensive experimental results.
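
    The in-place property that PIWS exploits is easiest to see in a lifting implementation of the simplest wavelet. A minimal sketch using Haar lifting (not the filter bank used in the paper): both the forward and inverse transforms overwrite the input buffer, so coefficients, intermediate lifting results, and synthesized samples can share one memory unit.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting, computed in place.
    Afterwards x[0::2] holds the smooth (low-pass) coefficients and
    x[1::2] the detail (high-pass) coefficients."""
    x[1::2] -= x[0::2]          # predict: detail = odd - even
    x[0::2] += x[1::2] / 2      # update:  smooth = even + detail / 2
    return x

def haar_lift_inverse(x):
    """Exact inverse, again touching no memory beyond the input buffer."""
    x[0::2] -= x[1::2] / 2      # undo update
    x[1::2] += x[0::2]          # undo predict
    return x

sig = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 3.0])
rec = haar_lift_inverse(haar_lift_forward(sig.copy()))
assert np.allclose(rec, sig)    # perfect reconstruction
```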

  7. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.
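
    As a taste of the scripting workflow such tutorials build on, here is a minimal, hypothetical Blender Python (bpy) script that assembles and renders a one-object scene; the object parameters, colors, and output path are illustrative, not examples from the book.

```python
# Run inside Blender, e.g.: blender --background --python this_script.py
import bpy

# Clear the default scene, then add a sphere to stand in for a data point.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()
bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(0, 0, 0))

# Simple colored material so the object reads clearly when rendered.
mat = bpy.data.materials.new(name="DataPoint")
mat.diffuse_color = (0.1, 0.4, 0.9, 1.0)   # RGBA
bpy.context.active_object.data.materials.append(mat)

# Camera and light, then render a still image to disk.
bpy.ops.object.camera_add(location=(0, -6, 2), rotation=(1.2, 0, 0))
bpy.context.scene.camera = bpy.context.active_object
bpy.ops.object.light_add(type='SUN', location=(0, -4, 6))
bpy.context.scene.render.filepath = "/tmp/datapoint.png"
bpy.ops.render.render(write_still=True)
```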

  8. [Usefulness of volume rendering stereo-movie in neurosurgical craniotomies].

    PubMed

    Fukunaga, Tateya; Mokudai, Toshihiko; Fukuoka, Masaaki; Maeda, Tomonori; Yamamoto, Kouji; Yamanaka, Kozue; Minakuchi, Kiyomi; Miyake, Hirohisa; Moriki, Akihito; Uchida, Yasufumi

    2007-12-20

    In recent years, advancements in MR technology, combined with the development of multi-channel coils, have substantially shortened inspection times. In addition, rapid improvements in workstation performance have simplified the image-making process. Consequently, graphical images of intra-cranial lesions can be created easily. For example, three-dimensional spoiled gradient echo (3D-SPGR) volume rendering (VR) after injection of a contrast medium is applied clinically as a preoperative reference image. Recently, improvements in 3D-SPGR VR resolution have enabled accurate surface images of the brain to be obtained. We used stereo imaging created by weighted maximum intensity projection (weighted MIP) to determine the skin incision line. Furthermore, the stereo-imaging technique utilizing 3D-SPGR VR was used in the cases presented here. The techniques we report seem very useful for pre-operative simulation of neurosurgical craniotomies.

  9. Gesture Interaction Browser-Based 3D Molecular Viewer.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open-source system that allows the user to interact with a 3D molecular viewer using hand gestures for rotating, scaling, and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require the installation of third-party plug-ins or additional software components to visualize the supported chemical file formats. This kind of solution is suitable for instructing users in less IT-oriented environments, such as medicine or chemistry. For rendering the various molecular geometries, our team used GLmol (a molecular viewer written in JavaScript). Interaction with the 3D models is performed with a Leap Motion controller, which allows real-time tracking of the user's hand gestures. The first results confirmed that the application leads to a better understanding of various translational bioinformatics problems in both biomedical research and education.

  10. Sphere-enhanced microwave ablation (sMWA) versus bland microwave ablation (bMWA): technical parameters, specific CT 3D rendering and histopathology.

    PubMed

    Gockner, T L; Zelzer, S; Mokry, T; Gnutzmann, D; Bellemann, N; Mogler, C; Beierfuß, A; Köllensperger, E; Germann, G; Radeleff, B A; Stampfl, U; Kauczor, H U; Pereira, P L; Sommer, C M

    2015-04-01

    This study was designed to compare technical parameters during ablation, as well as CT 3D rendering and histopathology of the ablation zone, between sphere-enhanced microwave ablation (sMWA) and bland microwave ablation (bMWA). In six sheep livers, 18 microwave ablations were performed with identical system presets (power output: 80 W, ablation time: 120 s). In three sheep, transarterial embolisation (TAE) was performed immediately before microwave ablation using spheres (diameter: 40 ± 10 μm) (sMWA). In the other three sheep, microwave ablation was performed without sphere embolisation (bMWA). Contrast-enhanced CT, sacrifice, and liver harvest followed immediately after microwave ablation. Study goals included the technical parameters during ablation (resulting power output, ablation time), the geometry of the ablation zone determined by specific CT 3D rendering with a software prototype (short axis of the ablation zone, volume of the largest aligned ablation sphere within the ablation zone), and histopathology (hematoxylin-eosin, Masson Goldner, and TUNEL). Resulting power output/ablation times were 78.7 ± 1.0 W/120 ± 0.0 s for bMWA and 78.4 ± 1.0 W/120 ± 0.0 s for sMWA (n.s., respectively). Short axis/volume were 23.7 ± 3.7 mm/7.0 ± 2.4 cm³ for bMWA and 29.1 ± 3.4 mm/11.5 ± 3.9 cm³ for sMWA (P < 0.01, respectively). Histopathology confirmed the signs of coagulation necrosis as well as early and irreversible cell death for both bMWA and sMWA. For sMWA, spheres were detected within, at the rim of, and outside the ablation zone, without conspicuous features. Specific CT 3D rendering identifies a larger ablation zone for sMWA compared with bMWA. The histopathological signs and the detectable amount of cell death are comparable for both groups. When comparing sMWA with bMWA, TAE has no effect on the technical parameters during ablation.
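
    One way to compute a metric like "the largest aligned ablation sphere within the ablation zone" is via a Euclidean distance transform of the segmented zone, whose maximum gives the radius of the largest inscribed sphere. A sketch on synthetic data; the paper's software prototype may compute this differently, and the voxel spacing is an assumed value:

```python
import numpy as np
from scipy import ndimage

# Binary mask of a segmented ablation zone (stand-in data: an ellipsoid).
z, y, x = np.mgrid[-40:40, -40:40, -40:40]
zone = ((z / 30.0) ** 2 + (y / 24.0) ** 2 + (x / 20.0) ** 2) < 1

# Distance from every inside voxel to the zone boundary; its maximum is
# the radius of the largest sphere that fits entirely inside the zone.
voxel_mm = 0.5                                 # assumed isotropic spacing
dist = ndimage.distance_transform_edt(zone, sampling=voxel_mm)
r_mm = dist.max()
volume_cm3 = (4.0 / 3.0) * np.pi * (r_mm / 10.0) ** 3
print(f"largest inscribed sphere: r = {r_mm:.1f} mm, V = {volume_cm3:.1f} cm^3")
```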

  11. Specificity Rendering ‘Hot-Spots’ for Aurora Kinase Inhibitor Design: The Role of Non-Covalent Interactions and Conformational Transitions

    PubMed Central

    Badrinarayan, Preethi; Sastry, G. Narahari

    2014-01-01

    The present study examines the conformational transitions occurring among the major structural motifs of Aurora kinase (AK) concomitant with the DFG-flip and deciphers the role of non-covalent interactions in rendering specificity. Multiple sequence alignment, docking, and structural analysis of a repertoire of 56 crystal structures of AK from the Protein Data Bank (PDB) have been carried out. The crystal structures were systematically categorized based on the conformational disposition of the DFG-loop [in (DI) 42, out (DO) 5, and out-up (DOU) 9], the G-loop [extended (GE) 53 and folded (GF) 3], and the αC-helix [in (CI) 42 and out (CO) 14]. The overlapping subsets in this categorization show the inter-dependency among the structural motifs. Therefore, the four distinct possibilities, a) 2W1C (DI, CI, GE), b) 3E5A (DI, CI, GF), c) 3DJ6 (DI, CO, GF), and d) 3UNZ (DOU, CO, GF), along with their co-crystals and apo-forms, were subjected to molecular dynamics simulations of 40 ns each to evaluate the variations of individual residues and their impact on forming interactions. The non-covalent interactions formed by the 157 AK co-crystals with different regions of the binding site were initially studied with the docked complexes and structure interaction fingerprints. The frequency of the most prominent interactions was gauged in the AK inhibitors from the PDB and in the four representative conformations during the 40 ns. Based on this study, seven major non-covalent interactions, and their complementary sites in AK capable of rendering specificity, have been prioritized for the design of different classes of inhibitors. PMID:25485544

  12. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments through a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly, because 3D video data contain depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information, and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserve enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
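
    A toy sketch in the spirit of the first scheme (color reduction, then deflating color and depth together with zlib); the frame contents, quantization depth, and sizes are illustrative synthetic stand-ins, not the paper's data:

```python
import numpy as np
import zlib

# Synthetic 640x480 frame: smooth color ramps and a planar 16-bit depth map.
h, w = 480, 640
color = np.zeros((h, w, 3), dtype=np.uint8)
color[..., 0] = np.linspace(0, 255, w, dtype=np.uint8)           # red ramp
color[..., 1] = np.linspace(0, 255, h, dtype=np.uint8)[:, None]  # green ramp
depth = np.tile(np.arange(w, dtype=np.uint16), (h, 1))

# Color reduction (here to 4 bits per channel), then deflate both planes.
color_q = (color >> 4) << 4
payload = color_q.tobytes() + depth.tobytes()
packed = zlib.compress(payload, level=6)
print(f"compression ratio ~ {len(payload) / len(packed):.1f}")

# Decoder side: inflate and reshape both planes, losslessly.
raw = zlib.decompress(packed)
c2 = np.frombuffer(raw[:color_q.nbytes], np.uint8).reshape(h, w, 3)
d2 = np.frombuffer(raw[color_q.nbytes:], np.uint16).reshape(h, w)
assert (c2 == color_q).all() and (d2 == depth).all()
```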

  13. Computer 3D site model generation based on aerial images

    NASA Astrophysics Data System (ADS)

    Zheltov, Sergey Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiriakov, Alexandre V.

    1997-07-01

    The technology for designing 3D models of real-world scenes and rendering them photorealistically is a current topic of investigation. Such technology is attractive for a vast variety of applications: military mission planning, crew training, civil engineering, architecture, and virtual-reality entertainment, to name just a few. 3D photorealistic models of urban areas are often discussed now as an upgrade from existing 2D geographic information systems. The possibility of generating site models with fine detail depends on two main factors: the available source dataset and the computing resources. In this paper, a PC-based technology is presented with which scenes of medium resolution (scale 1:1000) can be constructed. The datasets are gray-level aerial stereo pairs of photographs (scale 1:14000) and true-color ground photographs of buildings (scale ca. 1:1000). True-color terrestrial photographs are also necessary for photorealistic rendering, which greatly improves human perception of the scene.

  14. Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings

    NASA Astrophysics Data System (ADS)

    Mankovich, Nicholas J.; White, Stuart C.

    1992-05-01

    This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study demonstrating the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT data to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues to be the predominant method for producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla in which 3-D reconstructions made with different bone thresholds (windows) are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded-surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computer-rendered lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques for the 3-D reconstruction, as well as cautionary language that should accompany the 3-D images.
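
    The threshold sensitivity discussed above is easy to reproduce with a modern isosurface extractor such as marching cubes. A sketch on a synthetic blurred "bone" sphere; the HU values and thresholds are illustrative, with the 620 HU level playing the role of the bone/soft-tissue midpoint rule described in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import marching_cubes, mesh_surface_area

# Stand-in CT volume: a "bone" sphere (~1200 HU) in soft tissue (~40 HU),
# blurred so the boundary spans several voxels, as in real CT.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
vol = np.where(z**2 + y**2 + x**2 < 20**2, 1200.0, 40.0)
vol = gaussian_filter(vol, sigma=2)

# Reconstruct the bone surface at several thresholds and compare sizes.
for level in (200.0, 620.0, 1000.0):
    verts, faces, _, _ = marching_cubes(vol, level=level)
    area = mesh_surface_area(verts, faces)
    print(f"threshold {level:6.0f} HU -> surface area {area:9.0f} voxel^2")
```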

  15. Synchrotron-based micro and nanotomographic investigations of soil aggregate microbial and pore structure

    NASA Astrophysics Data System (ADS)

    Kemner, K. M.; O'Brien, S.; Whiteside, M. D.; Sholto-Douglas, D.; Antipova, O.; Bailey, V.; Boyanov, M.; Dohnalkova, A.; Gursoy, D.; Kovarik, L.; Lai, B.; Roehrig, C.; Vogt, S.

    2017-12-01

    Soil is a highly complex network of pore spaces, minerals, and organic matter (e.g., roots, fungi, and bacteria), making it physically heterogeneous over nano- to macro-scales. Such complexity arises from feedbacks between physical processes and biological activity that generate a dynamic, self-organizing 3D complex. Since we first demonstrated the utility of synchrotron-based transmission tomography for imaging internal soil aggregate structure [Kemner et al., 1998], we and many other researchers have made use of and advanced this technique. However, our understanding of how microbes and microbial metabolism are distributed throughout soil aggregates is limited, because no technique is available to image both the soil pore network and the life that inhabits it. X-ray transmission microtomography can provide highly detailed 3D renderings of soil structure but cannot distinguish cells from other electron-light material such as air or water. However, the use of CdSe quantum dots (QDs) as a reporter of bacterial presence enables us to overcome this constraint, instilling bacterial cells with enough contrast to detect them and their metabolic functions in their opaque soil habitat, with hard x-rays capable of penetrating 3D soil structures at high resolution. Previous transmission tomographic imaging of soil aggregates with high-energy synchrotron x-rays has demonstrated 700 nm³ voxel spatial resolution. These and recent results from nanotomographic x-ray transmission imaging of soil aggregates with 30 nm³ voxel resolution will be presented. In addition, results of submicron-voxel x-ray fluorescence 3D imaging to determine microbial distributions within soil aggregates, and the critical role to be played by the upgrade of the Advanced Photon Source, which will provide 100-1000X increases in hard x-ray brilliance, will also be presented. Kemner et al., SPIE 3449, 45-53, 1998.

  16. High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment

    NASA Astrophysics Data System (ADS)

    Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.

    2006-12-01

    The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high-resolution, high-quality renderings of Earth-science data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Coastal Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high-definition (HD) animations of SCCOOS sensor instruments (e.g., REMUS, drifters, the Spray glider, the nearshore mooring, the OCSD/USGS mooring, and the CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations aim to provide researchers with a broader context of sensor locations relative to geologic characteristics, to serve as an educational resource for informal education settings and for increasing public awareness, and to aid researchers' proposals and presentations. They are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.

  17. 3D chromosome rendering from Hi-C data using virtual reality

    NASA Astrophysics Data System (ADS)

    Zhu, Yixin; Selvaraj, Siddarth; Weber, Philip; Fang, Jennifer; Schulze, Jürgen P.; Ren, Bing

    2015-01-01

    Most genome browsers display DNA linearly, using one-dimensional depictions that are useful for examining certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher, or missed entirely, if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the folding of enhancers over genes is only fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means of modeling chromosomes is essential. Using coordinates generated from Hi-C interaction-frequency data, we have created interactive 3D models of whole-chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
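
    A common way to turn a contact matrix into such coordinates is to convert interaction frequencies into dissimilarities and embed them with multidimensional scaling. The toy matrix and the inverse-frequency distance rule below are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
from sklearn.manifold import MDS

# Toy Hi-C interaction-frequency matrix for 5 loci (symmetric, arbitrary).
freq = np.array([[0., 9., 4., 2., 1.],
                 [9., 0., 8., 3., 2.],
                 [4., 8., 0., 7., 3.],
                 [2., 3., 7., 0., 9.],
                 [1., 2., 3., 9., 0.]])

# Heuristic: spatial distance falls off as an inverse power of contact
# frequency (exponent 1 assumed here; published pipelines differ).
with np.errstate(divide='ignore'):
    dist = np.where(freq > 0, 1.0 / freq, 0.0)

# Embed the loci in 3D so pairwise distances match as well as possible.
mds = MDS(n_components=3, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(dist)
print(coords)   # one (x, y, z) per locus, ready to render as a 3D tube
```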

  18. A stereoscopic system for viewing the temporal evolution of brain activity clusters in response to linguistic stimuli

    NASA Astrophysics Data System (ADS)

    Forbes, Angus; Villegas, Javier; Almryde, Kyle R.; Plante, Elena

    2014-03-01

    In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional magnetic resonance imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on Independent Component Analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. That is, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language, displaying these clusters as they change over time. The raw fMRI data are presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering. The clusters are presented using a ray-casting technique for volume rendering. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data.
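
    Spatial ICA of the kind commonly applied to fMRI can be sketched with scikit-learn's FastICA. The data here are synthetic stand-ins (200 time points, 5000 voxels, 3 mixed sources), and the |z| > 2 cluster threshold is an assumed convention, not the authors':

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy stand-in for an fMRI session: 200 time points x 5000 voxels.
rng = np.random.default_rng(0)
t = np.linspace(0, 50, 200)
sources = np.stack([np.sin(0.7 * t),
                    np.sign(np.sin(1.3 * t)),
                    rng.standard_normal(200)])        # 3 temporal sources
mixing = rng.standard_normal((5000, 3))
data = sources.T @ mixing.T                           # time x voxels

# Unmix into independent components: time courses plus spatial maps.
ica = FastICA(n_components=3, random_state=0)
time_courses = ica.fit_transform(data)    # (200, 3) temporal modes
spatial_maps = ica.components_            # (3, 5000) voxel weights

# Threshold a standardized map to get a candidate "activity cluster".
zmap = (spatial_maps[0] - spatial_maps[0].mean()) / spatial_maps[0].std()
print("voxels in cluster:", int((np.abs(zmap) > 2).sum()))
```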

  19. A software system for evaluation and training of spatial reasoning and neuroanatomical knowledge in a virtual environment.

    PubMed

    Armstrong, Ryan; de Ribaupierre, Sandrine; Eagleson, Roy

    2014-04-01

    This paper describes the design and development of a software tool for the evaluation and training of surgical residents using an interactive, immersive virtual environment. Our objective was to develop a tool to evaluate users' spatial reasoning skills and knowledge in a neuroanatomical context, as well as to augment their performance through interactivity. In the visualization, manually segmented anatomical surfaces from MRI scans of the brain were rendered using a stereo display to improve depth cues. A magnetically tracked wand was used as a 3D input device for localization tasks within the brain. The movement of the wand was made to correspond to the movement of a spherical cursor within the rendered scene, providing a reference for localization. Users can be tested on their ability to localize structures within the 3D scene and to place anatomical features at the appropriate locations within the rendering.

  20. Three-dimensional representations of complex carbohydrates and polysaccharides--SweetUnityMol: a video game-based computer graphic software.

    PubMed

    Pérez, Serge; Tubiana, Thibault; Imberty, Anne; Baaden, Marc

    2015-05-01

    A molecular visualization program tailored to the range of 3D structures of complex carbohydrates and polysaccharides, either alone or in their interactions with other biomacromolecules, has been developed using advanced technologies elaborated by the video games industry. All the specific structural features displayed by the simplest to the most complex carbohydrate molecules have been considered and can be depicted. This concerns monosaccharide identification and classification, conformations, location in single or multiple branched chains, depiction of secondary structural elements, and the essential constituent elements of very complex structures. Particular attention was given to complying with the accepted nomenclature and pictorial representations used in glycoscience. This achievement provides a continuum between the most popular ways of depicting the primary structures of complex carbohydrates and visualizing their 3D structures, while giving users many options to select the most appropriate modes of representation, including new features such as the use of textures to depict molecular properties. These developments are incorporated in a stand-alone viewer capable of displaying molecular structures, biomacromolecule surfaces, and complex interactions of biomacromolecules, with powerful, artistic, and illustrative rendering methods. The result is open-source software compatible with multiple platforms, i.e., Windows, MacOS, and Linux operating systems, as well as web pages, producing publication-quality figures. The algorithms and visualization enhancements are demonstrated using a variety of carbohydrate molecules, from glycan determinants to glycoproteins and complex protein-carbohydrate interactions, as well as very complex mega-oligosaccharides, bacterial polysaccharides, and multi-stranded polysaccharide architectures.

  1. Interactive Computer-Enhanced Remote Viewing System (ICERVS): Final report, November 1994--September 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-05-01

    The Interactive Computer-Enhanced Remote Viewing System (ICERVS) is a software tool for complex three-dimensional (3-D) visualization and modeling. Its primary purpose is to facilitate the use of robotic and telerobotic systems in remote and/or hazardous environments, where spatial information is provided by 3-D mapping sensors. ICERVS provides a robust, interactive system for viewing sensor data in 3-D and combines this with interactive geometric modeling capabilities that allow an operator to construct CAD models to match the remote environment. Part I of this report traces the development of ICERVS through three evolutionary phases: (1) development of first-generation software to render orthogonal view displays and wireframe models; (2) expansion of this software to include interactive viewpoint control, surface-shaded graphics, material (scalar and nonscalar) property data, cut/slice planes, color and visibility mapping, and generalized object models; (3) demonstration of ICERVS as a tool for the remediation of underground storage tanks (USTs) and the dismantlement of contaminated processing facilities. Part II of this report details the software design of ICERVS, with particular emphasis on its object-oriented architecture and user interface.

  2. Image simulation for HardWare In the Loop simulation in EO domain

    NASA Astrophysics Data System (ADS)

    Cathala, Thierry; Latger, Jean

    2015-10-01

    An infrared camera serving as a weapon subsystem for automatic guidance is a key component of military carriers such as missiles. The associated image processing, which controls navigation, needs to be intensively assessed. Experimentation in the real world is very expensive, which is the main reason hybrid simulation, also called HardWare In the Loop (HWIL), is increasingly required nowadays. In this field, IR projectors are able to cast IR photon fluxes directly onto the IR camera of a given weapon system, typically a missile seeker head. In the laboratory, the missile is thus stimulated exactly as in the real world, provided a realistic simulation tool can generate the synthetic images displayed by the IR projectors. The key technical challenge is to render the synthetic images at the required frequency. This paper focuses on OKTAL-SE's experience in this domain through its SE-FAST-HWIL product. It presents the methodology and return of experience from OKTAL-SE, with examples given within the framework of the SE-Workbench. The presentation focuses on trials on real, operationally complex 3D cases. In particular, three important topics that are very sensitive with regard to IG performance are detailed: first, 3D sea-surface representation; then particle-system rendering, especially for simulating flares; and finally, sensor-effects modelling. Beyond the "projection mode", some information is given on the new SE-FAST-HWIL capabilities dedicated to the "injection mode".

  3. Natural 3D content on glasses-free light-field 3D cinema

    NASA Astrophysics Data System (ADS)

    Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.

    2013-03-01

    This paper presents a complete framework for capturing, processing, and displaying free-viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free-viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The display is the first of its kind, equipped with front-projection light-field HoloVizio technology and controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting-room geometries, yet it is simpler to set up. The software system makes it possible to show 3D applications in real time, besides natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, running on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.

  4. Digital photography and 3D MRI-based multimodal imaging for individualized planning of resective neocortical epilepsy surgery.

    PubMed

    Wellmer, Jörg; von Oertzen, Joachim; Schaller, Carlo; Urbach, Horst; König, Roy; Widman, Guido; Van Roost, Dirk; Elger, Christian E

    2002-12-01

    Invasive presurgical work-up of pharmacoresistant epilepsies presumes the integration of multiple diagnostic modalities into a comprehensive picture of seizure onset and eloquent brain areas. During resection, evaluation results must be reliably transferred to the patient's individual anatomy. We investigated the value of digital photography-based grid localization in combination with preoperative three-dimensional (3D) magnetic resonance imaging (MRI) for clinical routine. Digital photographs of the exposed cortex were taken before and after grid placement. The locations of electrode contacts on the cortex were identified and schematically indicated on native cortex prints. Contact positions were then transferred manually to a 3D MRI brain-surface rendering using the rendering software. Results of the electrophysiologic evaluation were transferred to the respective electrode-contact reproductions and co-registered with imaging-based techniques such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and functional MRI (fMRI). Digital photography allows precise and highly realistic documentation of electrode contact positions on the individual neocortical surface. Lesions underneath grids can be highlighted by semitransparent MRI surface rendering, and lobar boundaries can be identified. Because electrode contact positions are integrated into the postprocessed 3D MRI data set, imaging-based techniques can be co-displayed with the results of the electrophysiologic evaluation. Comparison with CT/MRI co-registration showed good accuracy of the method; however, grids not sewn to the dura at implantation can undergo significant displacement. Digital photography in combination with preimplantation 3D MRI allows the generation of reliable, tailored resection plans in neocortical epilepsy surgery. The method enhances surgical safety and confidence.

  5. Semi-automated delineation of breast cancer tumors and subsequent materialization using three-dimensional printing (rapid prototyping).

    PubMed

    Schulz-Wendtland, Rüdiger; Harz, Markus; Meier-Meitinger, Martina; Brehm, Barbara; Wacker, Till; Hahn, Horst K; Wagner, Florian; Wittenberg, Thomas; Beckmann, Matthias W; Uder, Michael; Fasching, Peter A; Emons, Julius

    2017-03-01

    Three-dimensional (3D) printing has become widely available, and a few cases of its use in clinical practice have been described. The aim of this study was to explore facilities for the semi-automated delineation of breast cancer tumors and to assess the feasibility of 3D printing of breast cancer tumors. In a case series of five patients, different 3D imaging methods (magnetic resonance imaging (MRI), digital breast tomosynthesis (DBT), and 3D ultrasound) were used to capture 3D data for breast cancer tumors. The volumes of the breast tumors were calculated to assess the comparability of the breast tumor models, and the MRI information was used to render models on a commercially available 3D printer to materialize the tumors. The tumor volumes calculated from the different 3D methods appeared to be comparable. Tumor models with volumes between 325 mm³ and 7,770 mm³ were printed and compared with the models rendered from MRI. The materialized tumors reflected the corresponding computer models. 3D printing (rapid prototyping) appears to be feasible. Scenarios for the clinical use of the technology might include presenting the model to the surgeon to provide a better understanding of the tumor's spatial characteristics in the breast, in order to improve decision-making in relation to neoadjuvant chemotherapy or surgical approaches. J. Surg. Oncol. 2017;115:238-242. © 2016 Wiley Periodicals, Inc.

  6. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. The images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, virtual lighting, and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present some methods to process "digital holograms" for Internet transmission and the corresponding results.
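
    For illustration, classical four-step phase-shifting reconstruction recovers the wrapped phase of a "digital hologram" from four interferograms shifted by 0, π/2, π, and 3π/2. A minimal numpy sketch, assuming exact π/2 phase steps (the function name and the synthetic check are ours, not from the record):

        import numpy as np

        def phase_from_four_steps(i0, i1, i2, i3):
            # Standard four-step formula: with I_k = A + B*cos(phi + k*pi/2),
            # the wrapped phase is atan2(I3 - I1, I0 - I2).
            return np.arctan2(i3 - i1, i0 - i2)

        # synthetic check on a known phase ramp
        x = np.linspace(0, 4 * np.pi, 256)
        phi = np.tile(x, (256, 1))
        frames = [1.0 + 0.5 * np.cos(phi + k * np.pi / 2) for k in range(4)]
        wrapped = phase_from_four_steps(*frames)   # equals phi modulo 2*pi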

  7. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic visualization platform for exploring and understanding human anatomy. This system can present medical imaging data in three dimensions and allows for direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional, 2D display and interaction modalities, and in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.

  8. Using the stereokinetic effect to convey depth - Computationally efficient depth-from-motion displays

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Proffitt, Dennis R.

    1992-01-01

    Recent developments in microelectronics have encouraged the use of 3D databases to create compelling volumetric renderings of graphical objects. However, even with the computational capabilities of current-generation graphical systems, real-time displays of such objects are difficult, particularly when dynamic spatial transformations are involved. In this paper we discuss a type of visual stimulus (the stereokinetic effect display) that is computationally far less complex than a true three-dimensional transformation but yields an equally compelling depth impression, often perceptually indiscriminable from the true spatial transformation. Several possible applications for this technique are discussed (e.g., animating contour maps and air traffic control displays so as to evoke accurate depth percepts).
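
    As a concrete sketch of why a stereokinetic display is cheap to compute: each frame is only a rigid 2D rotation of nested, eccentric circles, yet the sequence evokes a 3D cone. A minimal frame generator, with purely illustrative parameter choices:

        import numpy as np

        def ske_frame(t, n_circles=8, omega=2 * np.pi * 0.2):
            # Returns (centre_x, centre_y, radius) for each nested circle at
            # time t; only the centres rotate -- no 3D transform is computed.
            circles = []
            for k in range(n_circles):
                radius = 1.0 - k / n_circles          # shrinking radii
                ecc = 0.3 * k / n_circles             # growing centre offset
                cx = ecc * np.cos(omega * t)
                cy = ecc * np.sin(omega * t)
                circles.append((cx, cy, radius))
            return circles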

  9. Improving the visualization of 3D ultrasound data with 3D filtering

    NASA Astrophysics Data System (ADS)

    Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
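
    The moving-average filtering mentioned above can be made independent of kernel size by using cumulative sums along each axis in turn. A minimal numpy sketch of a separable 3D boxcar filter (edge handling by replication is our assumption, not the authors' implementation):

        import numpy as np

        def boxcar_1d(vol, k, axis):
            # Moving average of odd width k along one axis via cumulative
            # sums: constant work per voxel regardless of kernel size.
            pad = k // 2
            padded = np.pad(vol, [(pad, pad) if a == axis else (0, 0)
                                  for a in range(vol.ndim)], mode='edge')
            c = np.cumsum(padded, axis=axis, dtype=np.float64)
            zero = np.zeros_like(np.take(c, [0], axis=axis))
            c = np.concatenate([zero, c], axis=axis)
            n = vol.shape[axis]
            hi = [slice(None)] * vol.ndim
            lo = [slice(None)] * vol.ndim
            hi[axis] = slice(k, k + n)
            lo[axis] = slice(0, n)
            return (c[tuple(hi)] - c[tuple(lo)]) / k

        def boxcar_3d(vol, k):
            # e.g. k=3 before compositing, k=7 before gradient computation
            for axis in range(3):
                vol = boxcar_1d(vol, k, axis)
            return vol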

  10. Metallic behavior in the graphene analogue Ni₃(HITP)₂ and a strategy to render the material a semiconductor.

    DOE PAGES

    Foster, Michael E.; Sohlberg, Karl; Spataru, Dan Catalin; ...

    2016-06-19

    The metal-organic framework material Ni₃(2,3,6,7,10,11-hexaiminotriphenylene)₂ (Ni₃(HITP)₂) is composed of layers of extended conjugated planes analogous to graphene. We carried out density functional theory (DFT) calculations to model the electronic structure of bulk and monolayer Ni₃(HITP)₂. The layered 3D material is metallic, similar to graphene. Our calculations predict that there is appreciable band dispersion not only in-plane, but perpendicular to the stacking planes as well, suggesting that, unlike graphene, the conductivity may be nearly isotropic. In contrast, a 2D monolayer of the material exhibits a band gap, consistent with previously published results. Insight obtained from studies of the evolution of the material from semiconducting to metallic as the material is transitioned from 2D to 3D suggests the possibility of modifying the material to render it semiconducting by changing the metal center and inserting spacer moieties between the layers. Furthermore, the DFT calculations predict that the modified material will be structurally stable and exhibit a band gap.

  11. 3D Graphene-Infused Polyimide with Enhanced Electrothermal Performance for Long-Term Flexible Space Applications.

    PubMed

    Loeblein, Manuela; Bolker, Asaf; Tsang, Siu Hon; Atar, Nurit; Uzan-Saguy, Cecile; Verker, Ronen; Gouzman, Irina; Grossman, Eitan; Teo, Edwin Hang Tong

    2015-12-22

    Polyimides (PIs) have been praised for their high thermal stability, high modulus of elasticity and tensile strength, ease of fabrication, and moldability. They are currently the standard choice both as substrates for flexible electronics and for space shielding, as they offer high-temperature and UV stability and toughness. However, their poor thermal conductivity and completely electrically insulating character have caused other limitations, such as thermal management challenges for flexible high-power electronics and spacecraft electrostatic charging. To target these issues, a hybrid of PI with 3D-graphene (3D-C), 3D-C/PI, is developed here. This composite delivers extraordinary enhancements of thermal conductivity (one order of magnitude) and electrical conductivity (10 orders of magnitude). It withstands and keeps a stable performance throughout various bending and thermal cycles, as well as the oxidative and aggressive environment of ground-based, simulated space environments. This makes this new hybrid film a suitable material for flexible space applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Probing Novel Microstructural Evolution Mechanisms in Aluminum Alloys Using 4D Nanoscale Characterization

    DOE PAGES

    Kaira, C. Shashank; De Andrade, V.; Singh, Sudhanshu S.; ...

    2017-09-14

    Dispersions of nanoscale precipitates in metallic alloys have been known to play a key role in strengthening, by increasing their strain hardenability and providing resistance to deformation. Although these phenomena have been extensively investigated in the last century, the traditional approaches employed in the past have not rendered an authoritative microstructural understanding in such materials. The effect of the precipitates' inherent complex morphology and their 3D spatial distribution on evolution and deformation behavior have often been precluded. This study reports, for the first time, implementation of synchrotron-based hard X-ray nanotomography in Al–Cu alloys to measure kinetics of different nanoscale phases in 3D, and reveals insights behind some of the observed novel phase transformation reactions. The experimental results of the present study reconcile with coarsening models from the Lifshitz–Slyozov–Wagner theory to an unprecedented extent, thereby establishing a new paradigm for thermodynamic analysis of precipitate assemblies. Lastly, this study sheds light on the possibilities for establishing new theories for dislocation–particle interactions, based on the limitations of using the Orowan equation in estimating precipitation strengthening.
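
    For reference, the Lifshitz–Slyozov–Wagner comparison above is against the classical diffusion-limited coarsening law; in one common formulation (symbols as usually defined in the coarsening literature, not taken from the record):

        \langle r \rangle^{3} - \langle r_{0} \rangle^{3} = K t,
        \qquad
        K = \frac{8 \gamma c_{\infty} V_{m}^{2} D}{9 R T}

    where ⟨r⟩ is the mean precipitate radius, γ the interfacial energy, c∞ the equilibrium solute concentration, Vm the molar volume, and D the solute diffusivity.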

  14. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications pointed to the fact that 3D visualizations have potential advantages compared to conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for a rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures. This also allows for automatic pullback calibration. Then, according to the segmentation results, different structures are depicted with different colors to visualize the vessel wall, the stent, and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, 3D rendering was compared to angiography, pictures of deployed stents made available by the manufacturers, and conventional 2D imaging, corroborating the visualization results. Computational time for the visualization of an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578
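
    For illustration, the agreement statistics quoted above (a correlation value plus Bland-Altman limits of agreement) can be computed in a few lines; this is a generic sketch, not the authors' code:

        import numpy as np

        def agreement(manual, automatic):
            # Pearson correlation, bias and 95% limits of agreement
            # between paired manual and automatic measurements.
            manual = np.asarray(manual, dtype=float)
            automatic = np.asarray(automatic, dtype=float)
            r = np.corrcoef(manual, automatic)[0, 1]
            diff = automatic - manual
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return r, bias, (bias - half_width, bias + half_width)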

  15. Quantifying uncertainty and computational complexity for pore-scale simulations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.

    2016-12-01

    Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely resolved spatio-temporal scales, which further limits our data/sample collection. To address these challenges, we propose a novel framework based on the general polynomial chaos (gPC) expansion and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires a smaller number of realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
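
    To make the surrogate idea concrete, a one-dimensional polynomial chaos fit with probabilists' Hermite polynomials (orthogonal under a standard normal input) can be done by least squares; the toy model below stands in for an expensive pore-scale solver and is purely illustrative:

        import numpy as np
        from numpy.polynomial.hermite_e import hermevander, hermeval

        def fit_gpc(xi, y, degree):
            # Least-squares gPC fit: y(xi) ~ sum_k c_k He_k(xi), xi ~ N(0,1).
            design = hermevander(xi, degree)
            coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
            return coeffs

        expensive_model = lambda xi: np.exp(0.3 * xi) + 0.1 * xi ** 2
        xi_train = np.random.standard_normal(50)     # 50 model realizations
        c = fit_gpc(xi_train, expensive_model(xi_train), degree=5)
        prediction = hermeval(1.2, c)                # cheap surrogate call
        mean_estimate = c[0]                         # E[y] = c_0 by orthogonality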

  16. CT-guided Irreversible Electroporation in an Acute Porcine Liver Model: Effect of Previous Transarterial Iodized Oil Tissue Marking on Technical Parameters, 3D Computed Tomographic Rendering of the Electroporation Zone, and Histopathology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommer, C. M., E-mail: christof.sommer@med.uni-heidelberg.de; Fritz, S., E-mail: stefan.fritz@med.uni-heidelberg.de; Vollherbst, D., E-mail: dominikvollherbst@web.de

    Purpose: To evaluate the effect of previous transarterial iodized oil tissue marking (ITM) on technical parameters, three-dimensional (3D) computed tomographic (CT) rendering of the electroporation zone, and histopathology after CT-guided irreversible electroporation (IRE) in an acute porcine liver model as a potential strategy to improve IRE performance. Methods: After Ethics Committee approval was obtained, in five landrace pigs, two IREs of the right and left liver (RL and LL) were performed under CT guidance with identical electroporation parameters. Before IRE, transarterial marking of the LL was performed with iodized oil. Nonenhanced and contrast-enhanced CT examinations followed. One hour after IRE, animals were killed and livers collected. Mean resulting voltage and amperage during IRE were assessed. For 3D CT rendering of the electroporation zone, parameters for size and shape were analyzed. Quantitative data were compared by the Mann–Whitney test. Histopathological differences were assessed. Results: Mean resulting voltage and amperage were 2,545.3 ± 66.0 V and 26.1 ± 1.8 A for RL, and 2,537.3 ± 69.0 V and 27.7 ± 1.8 A for LL, without significant differences. Short axis, volume, and sphericity index were 16.5 ± 4.4 mm, 8.6 ± 3.2 cm³, and 1.7 ± 0.3 for RL, and 18.2 ± 3.4 mm, 9.8 ± 3.8 cm³, and 1.7 ± 0.3 for LL, without significant differences. For RL and LL, the electroporation zone consisted of severely widened hepatic sinusoids containing erythrocytes and showed homogeneous apoptosis. For LL, iodized oil could be detected in the center and at the rim of the electroporation zone. Conclusion: There is no adverse effect of previous ITM on technical parameters, 3D CT rendering of the electroporation zone, and histopathology after CT-guided IRE of the liver.

  17. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    ... area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. 1.1.1 Camera Calibration. Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) ... can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the ...

  18. Image fusion for visualization of hepatic vasculature and tumors

    NASA Astrophysics Data System (ADS)

    Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.

    1995-05-01

    We have developed segmentation and simultaneous display techniques to facilitate the visualization of the three-dimensional spatial relationships between organ structures and organ vasculature. We concentrate on the visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximum intensity projection (MIP) algorithms are used for data visualization. To extract the liver in the series of images accurately and efficiently, we have developed a user-friendly interactive program with deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures; adjacent contours are aligned and fitted with a Bezier surface to yield a smooth surface. Visualization of the vascular structures, portal and hepatic veins, is achieved by applying a MIP technique to the extracted liver volume. To integrate the extracted structures, the surface-rendered and MIP images are aligned, and a color table is designed for simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, portal veins, hepatic veins, and hepatic tumor can be inspected simultaneously and their spatial relationships can be more easily perceived. The proposed technique will be useful for visualization of both hepatic neoplasm and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
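
    For illustration, the MIP step described above reduces to a per-pixel maximum along the projection axis, restricted to the extracted liver volume; a minimal numpy sketch (the mask handling is our assumption):

        import numpy as np

        def masked_mip(volume, mask, axis=2):
            # Maximum intensity projection over the segmented liver only,
            # so surrounding tissue does not dominate the projection.
            background = volume.min()
            restricted = np.where(mask, volume, background)
            return restricted.max(axis=axis)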

  19. 3D Lasers Increase Efficiency, Safety of Moving Machines

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Canadian company Neptec Design Group Ltd. developed its Laser Camera System, used by shuttles to render 3D maps of their hulls for assessing potential damage. Using NASA funding, the firm incorporated LiDAR technology and created the TriDAR 3D sensor. Its commercial arm, Neptec Technologies Corp., has sold the technology to Orbital Sciences, which uses it to guide its Cygnus spacecraft during rendezvous and dock operations at the International Space Station.

  20. An image encryption algorithm based on 3D cellular automata and chaotic maps

    NASA Astrophysics Data System (ADS)

    Del Rey, A. Martín; Sánchez, G. Rodríguez

    2015-05-01

    A novel encryption algorithm to cipher digital images is presented in this work. The digital image is rendered into a three-dimensional (3D) lattice, and the protocol consists of two phases: the confusion phase, where 24 chaotic Cat maps are applied, and the diffusion phase, where a 3D cellular automaton is evolved. The encryption method is shown to be secure against the most important cryptanalytic attacks.
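
    The confusion phase rests on area-preserving chaotic maps. For illustration, one round of a generalized Arnold cat map permutes the pixels of a square image; the parameters a and b below are illustrative, not the paper's:

        import numpy as np

        def cat_map(img, a=1, b=1, rounds=1):
            # (x, y) -> (x + a*y, b*x + (a*b + 1)*y) mod n has determinant 1,
            # so it is a bijection on the pixel grid (a pure permutation).
            n = img.shape[0]
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
            xn = (x + a * y) % n
            yn = (b * x + (a * b + 1) * y) % n
            out = img
            for _ in range(rounds):
                scrambled = np.empty_like(out)
                scrambled[xn, yn] = out[x, y]
                out = scrambled
            return out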

  1. Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.

    PubMed

    Holub, Joseph; Winer, Eliot

    2017-12-01

    Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Combined with the rapid adoption of mobile devices for everyday work, this creates a need to visualize fMRI data on tablets or smartphones. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrate that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
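
    For illustration, the core of a volume raycaster is front-to-back alpha compositing along each ray, with early termination once opacity saturates, which is a key lever for frame rate on a constrained device. A per-ray sketch (sampling and transfer function omitted; not the authors' implementation):

        import numpy as np

        def composite_ray(samples_rgba):
            # samples_rgba: iterable of (r, g, b, a) ordered front to back.
            color = np.zeros(3)
            alpha = 0.0
            for r, g, b, a in samples_rgba:
                color += (1.0 - alpha) * a * np.array([r, g, b])
                alpha += (1.0 - alpha) * a
                if alpha > 0.99:        # early ray termination
                    break
            return color, alpha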

  2. Spherical Panorama Visualization of Astronomical Data with Blender and Python

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-06-01

    We describe methodology to generate 360 degree spherical panoramas of both 2D and 3D data. The techniques apply to a variety of astronomical data types - all sky maps, 2D and 3D catalogs as well as planetary surface maps. The results can be viewed in a desktop browser or interactively with a mobile phone or tablet. Static displays or panoramic video renderings of the data can be produced. We review the Python code and usage of the 3D Blender software for projecting maps onto 3D surfaces and the various tools for distributing visualizations.
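
    For readers who want to try this, the key Blender setting is a panoramic camera with an equirectangular projection rendered at a 2:1 aspect ratio. A minimal bpy sketch (the attribute paths follow the 2.8x/2.9x Cycles API and may differ in other Blender versions):

        import bpy

        scene = bpy.context.scene
        scene.render.engine = 'CYCLES'

        cam_data = bpy.data.cameras.new('PanoCam')
        cam_data.type = 'PANO'
        cam_data.cycles.panorama_type = 'EQUIRECTANGULAR'

        cam_obj = bpy.data.objects.new('PanoCam', cam_data)
        scene.collection.objects.link(cam_obj)
        scene.camera = cam_obj

        # 2:1 resolution is the convention for full spherical panoramas
        scene.render.resolution_x = 4096
        scene.render.resolution_y = 2048
        scene.render.filepath = '//panorama.png'
        bpy.ops.render.render(write_still=True)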

  3. An Interactive Virtual 3D Tool for Scientific Exploration of Planetary Surfaces

    NASA Astrophysics Data System (ADS)

    Traxler, Christoph; Hesina, Gerd; Gupta, Sanjeev; Paar, Gerhard

    2014-05-01

    In this paper we present an interactive 3D visualization tool for scientific analysis and planning of planetary missions. At the moment, scientists have to look at individual camera images separately; there is no tool to combine them in three dimensions and examine them seamlessly as a geologist would do (by walking backwards and forwards, resulting in different scales). For this reason, a virtual 3D reconstruction of the terrain that can be interactively explored is necessary. Such a reconstruction has to consider multiple scales, ranging from orbital image data to close-up surface image data from rover cameras. The 3D viewer allows seamless zooming between these various scales, giving scientists the possibility to relate small surface features (e.g., rock outcrops) to larger geological contexts. For a reliable geologic assessment, a realistic surface rendering is important. Therefore the material properties of the rock surfaces are considered for real-time rendering. This is achieved by an appropriate Bidirectional Reflectance Distribution Function (BRDF) estimated from the image data. The BRDF is implemented to run on the Graphics Processing Unit (GPU) to enable realistic real-time rendering, which allows a naturalistic perception for scientific analysis. Another important aspect for realism is the consideration of natural lighting conditions, which means skylight to illuminate the reconstructed scene. In our case we provide skylights from Mars and Earth, which allows switching between these two modes of illumination. This gives geologists the opportunity to perceive rock outcrops from Mars as they would appear on Earth, facilitating scientific assessment. Besides viewing the virtual reconstruction at multiple scales, scientists can also perform various measurements, i.e., the geo-coordinates of a selected point or the distance between two surface points. Rover or other models can be placed into the scene and snapped onto certain locations of the terrain. These are important features to support the planning of rover paths. In addition, annotations can be placed directly into the 3D scene, which also serve as landmarks to aid navigation. The presented visualization and planning tool is a valuable asset for scientific analysis of planetary mission data. It complements traditional methods by giving access to an interactive virtual 3D reconstruction, which is realistically rendered. Representative examples and further information about the interactive 3D visualization tool can be found on the FP7-SPACE Project PRoViDE web page http://www.provide-space.eu/interactive-virtual-3d-tool/. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 312377 'PRoViDE'.

  4. Research on techniques for computer three-dimensional simulation of satellites and night sky

    NASA Astrophysics Data System (ADS)

    Yan, Guangwei; Hu, Haitao

    2007-11-01

    To study space attack-defense technology, a simulation of satellites is needed. We designed and implemented a 3D simulation system for satellites, rendered against a night-sky background. The system structure is as follows: one computer is used to simulate the orbits of the satellites, and the other computers are used to render the 3D simulation scene. To achieve a realistic effect, a three-channel multi-projector display system is constructed. We use MultiGen Creator to construct satellite and star models, and MultiGen Distributed Vega to render the three-channel scene. There are one master and three slaves; the master controls the three slaves to render the three channels separately. To get the satellites' positions and attitudes, the master communicates with the satellite orbit simulator based on the TCP/IP protocol. It then calculates the observer's position, the satellites' positions, and the positions of the moon and the sun, and transmits the data to the slaves. To obtain a smooth orbit for the target satellites, an orbit prediction method is used. Because the target satellite data packets and the attack satellite data packets cannot keep synchronization in the network, a target satellite dithering phenomenon occurs when the scene is rendered. To resolve this problem, an anti-dithering algorithm is designed. To render the night sky background, a file which stores the stars' position and brightness data is used. According to its brightness, each star is classified into a magnitude class, and the star model is scaled according to the magnitude. All the stars are distributed on a celestial sphere. Experiments show that the whole system runs correctly and the frame rate can reach 30 Hz. The system can be used in a space attack-defense simulation field.

  5. Three-dimensional printing of X-ray computed tomography datasets with multiple materials using open-source data processing.

    PubMed

    Sander, Ian M; McGoldrick, Matthew T; Helms, My N; Betts, Aislinn; van Avermaete, Anthony; Owers, Elizabeth; Doney, Evan; Liepert, Taimi; Niebur, Glen; Liepert, Douglas; Leevy, W Matthew

    2017-07-01

    Advances in three-dimensional (3D) printing allow for digital files to be turned into a "printed" physical product. For example, complex anatomical models derived from clinical or pre-clinical X-ray computed tomography (CT) data of patients or research specimens can be constructed using various printable materials. Although 3D printing has the potential to advance learning, many academic programs have been slow to adopt its use in the classroom despite increased availability of the equipment and digital databases already established for educational use. Herein, a protocol is reported for the production of an enlarged bone core and an accurate representation of human sinus passages in a 3D printed format using entirely consumer-grade printers and a combination of free software platforms. The comparative resolutions of three surface rendering programs were also determined using data files of the sinuses, a human body, and a human wrist to compare the abilities of different software available for surface map generation of biomedical data. The data show that 3D Slicer provided the highest compatibility and surface resolution for anatomical 3D printing. Generated surface maps were then 3D printed via fused deposition modeling (FDM printing). In conclusion, a methodological approach is presented that explains the production of anatomical models using entirely consumer-grade fused deposition modeling machines and a combination of free software platforms. The methods outlined will facilitate the incorporation of 3D printed anatomical models in the classroom. Anat Sci Educ 10: 383-391. © 2017 American Association of Anatomists.
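
    For illustration, the surface-map-to-printer path can be reproduced with open tools: extract an isosurface from the CT volume and write a mesh file a slicer can import. A minimal sketch with scikit-image (the iso level and the OBJ output are our assumptions, not the authors' protocol):

        import numpy as np
        from skimage import measure

        def ct_to_obj(volume, iso_level, path):
            # Marching cubes isosurface -> Wavefront OBJ for FDM slicing.
            verts, faces, _normals, _values = measure.marching_cubes(
                volume, level=iso_level)
            with open(path, 'w') as f:
                for v in verts:
                    f.write(f"v {v[0]:.4f} {v[1]:.4f} {v[2]:.4f}\n")
                for tri in faces + 1:      # OBJ indices are 1-based
                    f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")

        # e.g. cortical bone sits near ~300 HU in calibrated CT (assumption):
        # ct_to_obj(ct_volume, 300, 'bone.obj')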

  6. Visualizing Three-dimensional Slab Geometries with ShowEarthModel

    NASA Astrophysics Data System (ADS)

    Chang, B.; Jadamec, M. A.; Fischer, K. M.; Kreylos, O.; Yikilmaz, M. B.

    2017-12-01

    Seismic data that characterize the morphology of modern subducted slabs on Earth suggest that a two-dimensional paradigm is no longer adequate to describe the subduction process. Here we demonstrate the effect of data exploration of three-dimensional (3D) global slab geometries with the open source program ShowEarthModel. ShowEarthModel was designed specifically to support data exploration, by focusing on interactivity and real-time response using the Vrui toolkit. Sixteen movies are presented that explore the 3D complexity of modern subduction zones on Earth. The first movie provides a guided tour through the Earth's major subduction zones, comparing the global slab geometry data sets of Gudmundsson and Sambridge (1998), Syracuse and Abers (2006), and Hayes et al. (2012). Fifteen regional movies explore the individual subduction zones and regions intersecting slabs, using the Hayes et al. (2012) slab geometry models where available and the Engdahl and Villasenor (2002) global earthquake data set. Viewing the subduction zones in this way provides an improved conceptualization of the 3D morphology within a given subduction zone as well as the 3D spatial relations between the intersecting slabs. This approach provides a powerful tool for rendering earth properties and broadening capabilities in both Earth Science research and education by allowing for whole earth visualization. The 3D characterization of global slab geometries is placed in the context of 3D slab-driven mantle flow and observations of shear wave splitting in subduction zones. These visualizations contribute to the paradigm shift from a 2D to 3D subduction framework by facilitating the conceptualization of the modern subduction system on Earth in 3D space.

  7. Factor VIII organisation on nanodiscs with different lipid composition.

    PubMed

    Grushin, Kirill; Miller, Jaimy; Dalm, Daniela; Stoilova-McPhie, Svetla

    2015-04-01

    Nanodiscs (ND) are lipid bilayer membrane patches held by amphiphilic scaffolding proteins (MSP) of ~10 nm in diameter. Nanodiscs have been developed as lipid nanoplatforms for structural and functional studies of membrane and membrane associated proteins. Their size and monodispersity have rendered them unique for electron microscopy (EM) and single particle analysis studies of proteins and complexes either spanning or associated to the ND membrane. Binding of blood coagulation factors and complexes, such as the Factor VIII (FVIII) and the Factor VIIIa - Factor IXa (intrinsic tenase) complex to the negatively charged activated platelet membrane is required for normal haemostasis. In this study we present our work on optimising ND, specifically designed to bind FVIII at close to physiological conditions. The binding of FVIII to the negatively charged ND rich in phosphatidylserine (PS) was followed by electron microscopy at three different PS compositions and two different membrane scaffolding protein (MSP1D1) to lipid ratios. Our results show that the ND with highest PS content (80 %) and lowest MSP1D1 to lipid ratio (1:47) are the most suitable for structure determination of the membrane-bound FVIII by single particle EM. Our preliminary FVIII 3D reconstruction as bound to PS containing ND demonstrates the suitability of the optimised ND for structural studies by EM. Further assembly of the activated FVIII form (FVIIIa) and the whole FVIIIa-FIXa complex on ND, followed by EM and single particle reconstruction will help to identify the protein-protein and protein-membrane interfaces critical for the intrinsic tenase complex assembly and function.

  8. Science data visualization in planetary and heliospheric contexts with 3DView

    NASA Astrophysics Data System (ADS)

    Génot, V.; Beigbeder, L.; Popescu, D.; Dufourg, N.; Gangloff, M.; Bouchemit, M.; Caussarieu, S.; Toniutti, J.-P.; Durand, J.; Modolo, R.; André, N.; Cecconi, B.; Jacquey, C.; Pitout, F.; Rouillard, A.; Pinto, R.; Erard, S.; Jourdane, N.; Leclercq, L.; Hess, S.; Khodachenko, M.; Al-Ubaidi, T.; Scherf, M.; Budnik, E.

    2018-01-01

    We present a 3D orbit viewer application capable of displaying science data. 3DView, a web tool designed by the French Plasma Physics Data Center (CDPP) for the planetology and heliophysics community, has extended functionalities to render space physics data (observations and models alike) in their original 3D context. Time series, vectors, dynamic spectra, celestial body maps, magnetic field or flow lines, 2D cuts in simulation cubes, etc, are among the variety of data representation enabled by 3DView. The direct connection to several large databases, the use of VO standards and the possibility to upload user data makes 3DView a versatile tool able to cover a wide range of space physics contexts. The code is open source and the software is regularly used at Masters Degree level or summer school for pedagogical purposes. The present paper describes the general architecture and all major functionalities, and offers several science cases (simulation rendering, mission preparation, etc.) which can be easily replayed by the interested readers. Future developments are finally outlined.

  9. Hybrid 2D photonic crystal-assisted Lu3Al5O12:Ce ceramic-plate phosphor and free-standing red film phosphor for white LEDs with high color-rendering index.

    PubMed

    Park, Hoo Keun; Oh, Ji Hye; Kang, Heejoon; Zhang, Jian; Do, Young Rag

    2015-03-04

    This paper reports the combined optical effects of a two-dimensional (2D) SiNx photonic crystal layer (PCL)-assisted Lu3Al5O12:Ce (LuAG:Ce) green ceramic-plate phosphor (CPP) and a free-standing (Sr,Ca)AlSiN3:Eu red film phosphor to enhance luminous efficacy, color rendering index (CRI), and special CRI (R9) of LuAG:Ce CPP-capped white light-emitting diodes (LEDs) for high-power white LEDs at 350 mA. By introducing the 2D SiNx PCL, the luminous efficacy was improved by a factor of 1.25 and 1.15 compared to that of the conventional flat CPP-capped LED and the thickness-increased CPP-capped LED (with a thickness of 0.15 mm), respectively, while maintaining low color-rendering properties. The combining of the free-standing red film phosphor in the flat CPP-capped, the 2D PCL-assisted CPP-capped, and the thickness-increased CPP-capped LEDs led to enhancement of the CRI and the special CRI (R9); it also led to a decrease of the correlated color temperature (CCT) due to broad wavelength coverage via the addition of red emission. High CRI (94), natural white CCT (4450 K), and acceptable luminous efficacy (71.1 lm/W) were attained from the 2D PCL-assisted LuAG:Ce CPP/free-standing red film phosphor-based LED using a red phosphor concentration of 7.5 wt %. It is expected that the combination of the 2D PCL and the free-standing red film phosphor will be a good candidate for achieving a high-power white CPP-capped LED with excellent CRI.

  10. Tools for Analysis and Visualization of Large Time-Varying CFD Data Sets

    NASA Technical Reports Server (NTRS)

    Wilhelms, Jane; VanGelder, Allen

    1997-01-01

    In the second year, we continued to build upon and improve our scanline-based direct volume renderer that we developed in the first year of this grant. This extremely general rendering approach can handle regular or irregular grids, including overlapping multiple grids, and polygon mesh surfaces. It runs in parallel on multi-processors. It can also be used in conjunction with a k-d tree hierarchy, where approximate models and error terms are stored in the nodes of the tree, and approximate fast renderings can be created. We have extended our software to handle time-varying data where the data change but the grid does not. We are now working on extending it to handle more general time-varying data. We have also developed a new extension of our direct volume renderer that uses automatic decimation of the 3D grid, as opposed to an explicit hierarchy. We explored this alternative approach as being more appropriate for very large data sets, where the extra expense of a tree may be unacceptable. We also describe a new approach to direct volume rendering that uses hardware 3D textures and incorporates lighting effects. Volume rendering using hardware 3D textures is extremely fast, and machines capable of using this technique are becoming more moderately priced. While this technique, at present, is limited to use with regular grids, we are pursuing possible algorithms extending the approach to more general grid types. We have also begun to explore a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH '96. In our initial implementation, we automatically image the volume from 32 equidistant positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation. We are studying whether this will give a quantitative measure of the effects of approximation. We have created new tools for exploring the differences between images produced by various rendering methods. Images created by our software can be stored in the SGI RGB format. Our idtools software reads in pairs of images and compares them using various metrics. The differences between the images using the RGB, HSV, and HSL color models can be calculated and shown. We can also calculate the autocorrelation function and the Fourier transform of the images and image differences. We will explore how these image differences compare in order to find useful metrics for quantifying the success of various visualization approaches. In general, progress was consistent with our research plan for the second year of the grant.
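
    For illustration, two of the image metrics mentioned (per-channel RGB differences and the autocorrelation via the Fourier transform) are compact to state; a generic numpy sketch, not the idtools implementation:

        import numpy as np

        def mean_rgb_difference(img_a, img_b):
            # Mean absolute per-channel difference between two renderings.
            a = img_a.astype(np.float64)
            b = img_b.astype(np.float64)
            return np.abs(a - b).mean()

        def autocorrelation(gray):
            # Wiener-Khinchin: autocorrelation is the inverse FFT of the
            # power spectrum of the (mean-removed) image.
            f = np.fft.fft2(gray - gray.mean())
            return np.real(np.fft.ifft2(f * np.conj(f)))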

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindsey, Nicholas C.

    The growth of additive manufacturing as a disruptive technology poses nuclear proliferation concerns worthy of serious consideration. Additive manufacturing began in the early 1980s with technological advances in polymer manipulation, computer capabilities, and computer-aided design (CAD) modeling. It was originally limited to rapid prototyping; however, it eventually developed into a complete means of production that has slowly penetrated the consumer market. Today, additive manufacturing machines can produce complex and unique items in a vast array of materials including plastics, metals, and ceramics. These capabilities have democratized the manufacturing industry, allowing almost anyone to produce items as simple as cup holders or as complex as jet fuel nozzles. Additive manufacturing, or three-dimensional (3D) printing as it is commonly called, relies on CAD files created or shared by individuals with additive manufacturing machines to produce a 3D object from a digital model. This sharing of files means that a 3D object can be scanned or rendered as a CAD model in one country, and then downloaded and printed in another country, allowing items to be shared globally without physically crossing borders. The sharing of CAD files online has been a challenging task for the export controls regime to manage over the years, and additive manufacturing could make these transfers more common. In this sense, additive manufacturing is a disruptive technology not only within the manufacturing industry but also within the nuclear nonproliferation world. This paper provides an overview of the proliferation concerns associated with additive manufacturing.

  12. An augmented reality tool for learning spatial anatomy on mobile devices.

    PubMed

    Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti

    2017-09-01

    Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT)-derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.

  13. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  14. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not address the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed on the graphics card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.

  15. Toward a 3D video format for auto-stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha

    2008-08-01

    There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
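
    To make the "reconstruction of views prior to rendering" concrete: with rectified cameras, a virtual view can be synthesized from one color-plus-depth input by shifting each pixel by its disparity d = f·B/Z and resolving occlusions with a z-buffer. A deliberately simple sketch (hole filling omitted; all names are ours, not from the paper):

        import numpy as np

        def synthesize_view(color, depth, focal_px, baseline_m):
            # Depth-image-based rendering for a horizontally shifted camera;
            # depth is metric and assumed strictly positive.
            h, w = depth.shape
            out = np.zeros_like(color)
            zbuf = np.full((h, w), np.inf)
            disparity = focal_px * baseline_m / depth
            for y in range(h):
                for x in range(w):
                    xt = int(round(x - disparity[y, x]))
                    if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:
                        zbuf[y, xt] = depth[y, x]    # nearest surface wins
                        out[y, xt] = color[y, x]
            return out   # zero-valued holes would be inpainted in practice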

  16. Warm White Light-Emitting Diodes Based on a Novel Orange Cationic Iridium(III) Complex

    PubMed Central

    Tang, Huaijun; Meng, Guoyun; Chen, Zeyu; Wang, Kaimin; Zhou, Qiang; Wang, Zhengliang

    2017-01-01

    A novel orange cationic iridium(III) complex [(TPTA)2Ir(dPPOA)]PF6 (TPTA: 3,4,5-triphenyl-4H-1,2,4-triazole, dPPOA: N,N-diphenyl-4-(5-(pyridin-2-yl)-1,3,4-oxadiazol-2-yl)aniline) was synthesized and used as a phosphor in light-emitting diodes (LEDs). [(TPTA)2Ir(dPPOA)]PF6 has high thermal stability with a decomposition temperature (Td) of 375 °C, and its relative emission intensity at 100 °C is 88.8% of that at 25°C. When only [(TPTA)2Ir(dPPOA)]PF6 was used as a phosphor at 6.0 wt % in silicone and excited by a blue GaN (GaN: gallium nitride) chip (450 nm), an orange LED was obtained. A white LED fabricated by a blue GaN chip (450 nm) and only yellow phosphor Y3Al5O12:Ce3+ (YAG:Ce) (1.0 wt % in silicone) emitted cold white light, its CIE (CIE: Commission International de I’Eclairage) value was (0.32, 0.33), color rendering index (CRI) was 72.2, correlated color temperature (CCT) was 6877 K, and luminous efficiency (ηL) was 128.5 lm∙W−1. Such a cold white LED became a neutral white LED when [(TPTA)2Ir(dPPOA)]PF6 was added at 0.5 wt %; its corresponding CIE value was (0.35, 0.33), CRI was 78.4, CCT was 4896 K, and ηL was 85.2 lm∙W−1. It further became a warm white LED when [(TPTA)2Ir(dPPOA)]PF6 was added at 1.0 wt %; its corresponding CIE value was (0.39, 0.36), CRI was 80.2, CCT was 3473 K, and ηL was 46.1 lm∙W−1. The results show that [(TPTA)2Ir(dPPOA)]PF6 is a promising phosphor candidate for fabricating warm white LEDs. PMID:28773020

  17. SSM/OOM - SSM WITH OOM MANIPULATION CODE

    NASA Technical Reports Server (NTRS)

    Goza, S. P.

    1994-01-01

    Creating, animating, and recording solid-shaded and wireframe three-dimensional geometric models can be of great assistance in the research and design phases of product development, in project planning, and in engineering analyses. SSM and OOM are application programs which together allow for interactive construction and manipulation of three-dimensional models of real-world objects as simple as boxes or as complex as Space Station Freedom. The output of SSM, in the form of binary files defining geometric three-dimensional models, is used as input to OOM. Animation in OOM is done using 3D models from SSM as well as cameras and light sources. The animated results of OOM can be output to videotape recorders, film recorders, color printers, and disk files. SSM and OOM are also available separately as MSC-21914 and MSC-22263, respectively. The Solid Surface Modeler (SSM) is an interactive graphics software application for solid-shaded and wireframe three-dimensional geometric modeling. The program has a versatile user interface that, in many cases, allows mouse input for intuitive operation or keyboard input when accuracy is critical. SSM can be used as a stand-alone model generation and display program and offers high-fidelity still image rendering. Models created in SSM can also be loaded into the Object Orientation Manipulator for animation or engineering simulation. The Object Orientation Manipulator (OOM) is an application program for creating, rendering, and recording three-dimensional computer-generated still and animated images. This is done using geometrically defined 3D models, cameras, and light sources, referred to collectively as animation elements. OOM does not provide the tools necessary to construct 3D models; instead, it imports binary format model files generated by the Solid Surface Modeler (SSM). Model files stored in other formats must be converted to the SSM binary format before they can be used in OOM. SSM is available as MSC-21914 or as part of the SSM/OOM bundle, COS-10047. Among OOM's features are collision detection (with visual and audio feedback), the capability to define and manipulate hierarchical relationships between animation elements, stereographic display, and ray-traced rendering. OOM uses Euler angle transformations for calculating the results of translation and rotation operations. OOM and SSM are written in C-language for implementation on SGI IRIS 4D series workstations running the IRIX operating system. A minimum of 8 MB of RAM is recommended for each program. The standard distribution medium for this program package is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. These versions of OOM and SSM were released in 1993.
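
    For illustration, the Euler-angle transformations mentioned for OOM compose three axis rotations; the Z-Y-X order below is an assumption, since the record does not state OOM's convention:

        import numpy as np

        def euler_zyx(yaw, pitch, roll):
            # Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll), radians.
            cz, sz = np.cos(yaw), np.sin(yaw)
            cy, sy = np.cos(pitch), np.sin(pitch)
            cx, sx = np.cos(roll), np.sin(roll)
            rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            return rz @ ry @ rx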

  18. Efficient in-situ visualization of unsteady flows in climate simulation

    NASA Astrophysics Data System (ADS)

    Vetter, Michael; Olbrich, Stephan

    2017-04-01

    The simulation of climate data tends to produce very large data sets, which can hardly be processed in classical post-processing visualization applications. Typically, the visualization pipeline, consisting of the processes of data generation, visualization mapping, and rendering, is distributed into two parts over the network or separated via file transfer. Within most traditional post-processing scenarios, the simulation is done on a supercomputer whereas the data analysis and visualization are done on a graphics workstation. That way, temporary data sets with huge volume have to be transferred over the network, which leads to bandwidth bottlenecks and volume limitations. The solution to this issue is the avoidance of temporary storage, or at least a significant reduction of data complexity. Within the Climate Visualization Lab - as part of the Cluster of Excellence "Integrated Climate System Analysis and Prediction" (CliSAP) at the University of Hamburg, in cooperation with the German Climate Computing Center (DKRZ) - we develop and integrate an in-situ approach. Our software framework DSVR is based on the separation of the process chain between the mapping and the rendering processes. It couples the mapping process directly to the simulation by calling methods of a parallelized data extraction library, which creates a time-based sequence of geometric 3D scenes. This sequence is stored on a special streaming server with an interactive post-filtering option and then played out asynchronously in a separate 3D viewer application. Since the rendering is part of this viewer application, the scenes can be navigated interactively. In contrast to other in-situ approaches, where 2D images are created as part of the simulation or synchronous co-visualization takes place, our method supports interaction in 3D space and in time, as well as fixed frame rates. To integrate in-situ processing based on our DSVR framework and methods into the ICON climate model, we are continuously evolving the data structures and mapping algorithms of the framework to support the ICON model's native grid structures, since DSVR was originally designed for rectilinear grids only. We have now implemented a new output module for ICON to take advantage of the DSVR visualization. The visualization can be configured, as with most output modules, by using a specific namelist and is exemplarily integrated within the non-hydrostatic atmospheric model time loop. With the integration of a DSVR-based in-situ pathline extraction within ICON, a further milestone has been reached. The pathline algorithm as well as the grid data structures have been optimized for the domain decomposition used for the parallelization of ICON based on MPI and OpenMP. The software implementation and evaluation are done on the supercomputers at DKRZ. In principle, the data complexity is reduced from O(n³) to O(m), where n is the grid resolution and m is the number of supporting points of all pathlines. The stability and scalability evaluation is done using Atmospheric Model Intercomparison Project (AMIP) runs. We will give a short introduction to our software framework, as well as a short overview of the implementation and usage of DSVR within ICON. Furthermore, we will present visualization and evaluation results of sample applications.
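
    For illustration, the in-situ pathline extraction reduces to numerical integration of seed points through the unsteady velocity field; a generic midpoint-rule sketch, where velocity_at is assumed to wrap the simulation's (in-situ) field access:

        import numpy as np

        def advect_pathlines(seeds, velocity_at, t0, t1, dt):
            # RK2 (midpoint) integration of pathlines; returns the
            # supporting points, shape (steps + 1, n_seeds, 3).
            points = np.array(seeds, dtype=np.float64)
            supports = [points.copy()]
            t = t0
            while t < t1:
                k1 = velocity_at(points, t)
                k2 = velocity_at(points + 0.5 * dt * k1, t + 0.5 * dt)
                points = points + dt * k2
                supports.append(points.copy())
                t += dt
            return np.stack(supports)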

  19. Visualization of stereoscopic anatomic models of the paranasal sinuses and cervical vertebrae from the surgical and procedural perspective.

    PubMed

    Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei

    2017-11-01

    Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including the facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve roots, and vertebral artery that can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and cervical spine injection procedures. Volume rendering, surface rendering, and a new rendering technique, semi-auto-combined, were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.

  20. Sphere-Enhanced Microwave Ablation (sMWA) Versus Bland Microwave Ablation (bMWA): Technical Parameters, Specific CT 3D Rendering and Histopathology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gockner, T. L., E-mail: theresa.gockner@med.uni-heidelberg.de; Zelzer, S., E-mail: s.zelzer@dkfz-heidelberg.de; Mokry, T., E-mail: theresa.mokry@med.uni-heidelberg.de

    Purpose: This study was designed to compare technical parameters during ablation, as well as CT 3D rendering and histopathology of the ablation zone, between sphere-enhanced microwave ablation (sMWA) and bland microwave ablation (bMWA). Methods: In six sheep livers, 18 microwave ablations were performed with identical system presets (power output: 80 W, ablation time: 120 s). In three sheep, transarterial embolisation (TAE) was performed immediately before microwave ablation using spheres (diameter: 40 ± 10 μm) (sMWA). In the other three sheep, microwave ablation was performed without sphere embolisation (bMWA). Contrast-enhanced CT, sacrifice, and liver harvest followed immediately after microwave ablation. Study goals included technical parameters during ablation (resulting power output, ablation time), geometry of the ablation zone applying specific CT 3D rendering with a software prototype (short axis of the ablation zone, volume of the largest aligned ablation sphere within the ablation zone), and histopathology (hematoxylin-eosin, Masson Goldner, and TUNEL). Results: Resulting power output/ablation times were 78.7 ± 1.0 W/120 ± 0.0 s for bMWA and 78.4 ± 1.0 W/120 ± 0.0 s for sMWA (n.s., respectively). Short axis/volume were 23.7 ± 3.7 mm/7.0 ± 2.4 cm³ for bMWA and 29.1 ± 3.4 mm/11.5 ± 3.9 cm³ for sMWA (P < 0.01, respectively). Histopathology confirmed the signs of coagulation necrosis as well as early and irreversible cell death for bMWA and sMWA. For sMWA, spheres were detected within, at the rim of, and outside of the ablation zone without conspicuous features. Conclusions: Specific CT 3D rendering identifies a larger ablation zone for sMWA compared with bMWA. The histopathological signs and the detectable amount of cell death are comparable for both groups. When comparing sMWA with bMWA, TAE has no effect on the technical parameters during ablation.

  1. Design of Mott and topological phases on buckled 3d-oxide honeycomb lattices

    NASA Astrophysics Data System (ADS)

    Pentcheva, Rossitza

    The honeycomb lattice, as realized e.g. in graphene, has rendered a robust platform for innovative science and potential applications. A much richer generalization of this lattice arises in (111)-oriented bilayers of perovskites, adding the complexity of the strongly correlated, multiorbital nature of electrons in transition metal oxides. Based on first-principles calculations with an on-site Coulomb repulsion, here we provide trends in the evolution of ground states versus band filling in (111)-oriented (La XO3)2 /(LaAlO3)4 superlattices, with X spanning the entire 3d transition metal series. The competition between local quasi-cubic and global triangular symmetry triggers unanticipated broken-symmetry phases, with mechanisms ranging from Jahn-Teller distortion to charge-, spin-, and orbital-ordering. LaMnO3 and LaCoO3 bilayers, where spin-orbit coupling opens a sizable gap in the Dirac-point Fermi surface, emerge as much desired oxide-based Chern insulators, the latter displaying a gap capable of supporting room-temperature applications. Further realizations of the honeycomb lattice and geometry patterns beyond the perovskite structure will be addressed. Research supported by the DFG, SFB/TR80.

  2. Automated abdominal plane and circumference estimation in 3D US for fetal screening

    NASA Astrophysics Data System (ADS)

    Lorenz, C.; Brosch, T.; Ciofolo-Veit, C.; Klinder, T.; Lefevre, T.; Cavallaro, A.; Salim, I.; Papageorghiou, A. T.; Raynaud, C.; Roundhill, D.; Rouet, L.; Schadewaldt, N.; Schmidt-Richberg, A.

    2018-03-01

    Ultrasound is increasingly becoming a 3D modality. Mechanical and matrix array transducers are able to deliver 3D images with good spatial and temporal resolution. 3D imaging facilitates the application of automated image analysis to enhance workflows, which has the potential to make ultrasound a less operator-dependent modality. However, the analysis of the more complex 3D images and the definition of all examination standards on 2D images pose barriers to the use of 3D in daily clinical practice. In this paper, we address a part of the canonical fetal screening program, namely the localization of the abdominal cross-sectional plane with the corresponding measurement of the abdominal circumference in this plane. For this purpose, a fully automated pipeline has been designed, starting with a random forest based anatomical landmark detection. A feature-trained shape model of the fetal torso including inner organs, with the abdominal cross-sectional plane encoded into the model, is then transformed into the patient space using the landmark localizations. In a free-form deformation step, the model is individualized to the image, using a torso probability map generated by a convolutional neural network as an additional feature image. After adaptation, the abdominal plane and the abdominal torso contour in that plane are directly obtained. This allows the measurement of the abdominal circumference as well as the rendering of the plane for visual assessment. The method has been trained on 126 and evaluated on 42 abdominal 3D US datasets. An average plane offset error of 5.8 mm and an average relative circumference error of 4.9% were achieved on the evaluation set.
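    The final measurement step lends itself to a small sketch: once the torso contour in the abdominal plane is available as an ordered set of 2D points, the abdominal circumference is the closed-polygon perimeter. The contour below is a synthetic ellipse standing in for model output; point counts and dimensions are illustrative assumptions.

      # Abdominal circumference as the perimeter of the closed torso
      # contour in the abdominal plane (contour here: a synthetic ellipse).
      import numpy as np

      def circumference(contour_mm):
          """Perimeter of a closed polygon given ordered (n, 2) points in mm."""
          closed = np.vstack([contour_mm, contour_mm[:1]])
          return float(np.linalg.norm(np.diff(closed, axis=0), axis=1).sum())

      theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
      ellipse = np.stack([60.0 * np.cos(theta), 45.0 * np.sin(theta)], axis=1)
      print(f"abdominal circumference: {circumference(ellipse):.1f} mm")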

  3. Engineering graphene and TMDs based van der Waals heterostructures for photovoltaic and photoelectrochemical solar energy conversion.

    PubMed

    Li, Changli; Cao, Qi; Wang, Faze; Xiao, Yequan; Li, Yanbo; Delaunay, Jean-Jacques; Zhu, Hongwei

    2018-05-08

    Graphene and two-dimensional (2D) transition metal dichalcogenides (TMDs) have attracted significant interest due to their unique properties that cannot be obtained in their bulk counterparts. These atomically thin 2D materials have demonstrated strong light-matter interactions, tunable optical bandgap structures and unique structural and electrical properties, rendering possible the high conversion efficiency of solar energy with a minimal amount of active absorber material. The isolated 2D monolayer can be stacked into arbitrary van der Waals (vdWs) heterostructures without the need to consider lattice matching. Several combinations of 2D/3D and 2D/2D materials have been assembled to create vdWs heterojunctions for photovoltaic (PV) and photoelectrochemical (PEC) energy conversion. However, the complex, less-constrained, and more environmentally vulnerable interface in a vdWs heterojunction is different from that of a conventional, epitaxially grown heterojunction, engendering new challenges for surface and interface engineering. In this review, the physics of band alignment, the chemistry of surface modification and the behavior of photoexcited charge transfer at the interface during PV and PEC processes will be discussed. We will present a survey of the recent progress and challenges of 2D/3D and 2D/2D vdWs heterojunctions, with emphasis on their applicability to PV and PEC devices. Finally, we will discuss emerging issues yet to be explored for 2D materials to achieve high solar energy conversion efficiency and possible strategies to improve their performance.

  4. Approximating scatterplots of large datasets using distribution splats

    NASA Astrophysics Data System (ADS)

    Camuto, Matthew; Crawfis, Roger; Becker, Barry G.

    2000-02-01

    Many situations exist where the plotting of large data sets with categorical attributes is desired in a 3D coordinate system. For example, a marketing company may conduct a survey involving one million subjects and then plot people's favorite car type against their weight, height, and annual income. Scatter point plotting, in which each point is individually plotted at its corresponding Cartesian location using a defined primitive, is usually used to render a plot of this type. If the dependent variable is continuous, we can discretize the 3D space into bins or voxels and retain the average value of all records falling within each voxel. Previous work employed volume rendering techniques, in particular splatting, to represent this aggregated data by mapping each average value to a representative color.
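    The aggregation step described above can be sketched in a few lines: scatter records with a continuous dependent variable are binned into voxels, and each voxel retains the average of the records falling inside it. This is a minimal NumPy sketch of that binning only; the splatting (rendering) stage is not shown, and the survey-like data are randomly generated.

      # Bin scatter records into voxels and keep the per-voxel average of a
      # continuous dependent variable (the aggregation behind the splats).
      import numpy as np

      def voxel_averages(points, values, lo, hi, shape):
          """points: (n, 3); values: (n,); lo/hi: axis bounds; shape: voxel grid."""
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          idx = ((points - lo) / (hi - lo) * shape).astype(int)
          idx = np.clip(idx, 0, np.array(shape) - 1)
          sums, counts = np.zeros(shape), np.zeros(shape)
          np.add.at(sums, tuple(idx.T), values)      # accumulate per voxel
          np.add.at(counts, tuple(idx.T), 1)
          return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

      rng = np.random.default_rng(1)
      pts = rng.uniform(0.0, 1.0, size=(1_000_000, 3))  # weight, height, income
      value = pts[:, 0] * 50.0 + rng.normal(0.0, 5.0, 1_000_000)
      grid = voxel_averages(pts, value, [0, 0, 0], [1, 1, 1], (32, 32, 32))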

  5. A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images.

    PubMed

    Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito

    2012-07-01

    In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method, and then reconstructed by a hybrid method combining surface rendering and volume rendering methods. With surface rendering, multimodality and multithreshold techniques for 1 tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as 1 computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. Mean (±SEM) area under the ROC curve for rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091; p<0.001, Mann-Whitney U-test). The authors report a new method for automatic registration of preoperative imaging data from CT, MRI, and 3D rotational angiography for reconstruction into 1 computer graphic. The diagnostic rate of DVA associated with brainstem cavernous malformation was significantly better using interactive computer graphics than with 2D images. Interactive computer graphics was also useful in helping to plan the surgical access corridor.

  6. Warm White Light-Emitting Diodes Based on a Novel Orange Cationic Iridium(III) Complex.

    PubMed

    Tang, Huaijun; Meng, Guoyun; Chen, Zeyu; Wang, Kaimin; Zhou, Qiang; Wang, Zhengliang

    2017-06-16

    A novel orange cationic iridium(III) complex [(TPTA)₂Ir(dPPOA)]PF₆ (TPTA: 3,4,5-triphenyl-4H-1,2,4-triazole; dPPOA: N,N-diphenyl-4-(5-(pyridin-2-yl)-1,3,4-oxadiazol-2-yl)aniline) was synthesized and used as a phosphor in light-emitting diodes (LEDs). [(TPTA)₂Ir(dPPOA)]PF₆ has high thermal stability, with a decomposition temperature (Td) of 375 °C, and its relative emission intensity at 100 °C is 88.8% of that at 25 °C. When only [(TPTA)₂Ir(dPPOA)]PF₆ was used as a phosphor at 6.0 wt % in silicone and excited by a blue GaN (gallium nitride) chip (450 nm), an orange LED was obtained. A white LED fabricated with a blue GaN chip (450 nm) and only the yellow phosphor Y₃Al₅O₁₂:Ce³⁺ (YAG:Ce) (1.0 wt % in silicone) emitted cold white light; its CIE (Commission Internationale de l'Eclairage) value was (0.32, 0.33), color rendering index (CRI) was 72.2, correlated color temperature (CCT) was 6877 K, and luminous efficiency (ηL) was 128.5 lm·W⁻¹. Such a cold white LED became a neutral white LED when [(TPTA)₂Ir(dPPOA)]PF₆ was added at 0.5 wt %; its corresponding CIE value was (0.35, 0.33), CRI was 78.4, CCT was 4896 K, and ηL was 85.2 lm·W⁻¹. It further became a warm white LED when [(TPTA)₂Ir(dPPOA)]PF₆ was added at 1.0 wt %; its corresponding CIE value was (0.39, 0.36), CRI was 80.2, CCT was 3473 K, and ηL was 46.1 lm·W⁻¹. The results show that [(TPTA)₂Ir(dPPOA)]PF₆ is a promising phosphor candidate for fabricating warm white LEDs.

  7. 1. PHOTOCOPY OF RENDERING OF PSFS BUILDING BY D.E. SUTTON. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. PHOTOCOPY OF RENDERING OF PSFS BUILDING BY D.E. SUTTON. Date possibly 1929 or 1930, when construction started. - Philadelphia Saving Fund Society, Twelfth & Market Streets, Philadelphia, Philadelphia County, PA

  8. Confocal imaging of whole vertebrate embryos reveals novel insights into molecular and cellular mechanisms of organ development

    NASA Astrophysics Data System (ADS)

    Hadel, Diana M.; Keller, Bradley B.; Sandell, Lisa L.

    2014-03-01

    Confocal microscopy has been an invaluable tool for studying cellular or sub-cellular biological processes. The study of vertebrate embryology is based largely on examination of whole embryos and organs. The application of confocal microscopy to immunostained whole-mount embryos, combined with three-dimensional (3D) image reconstruction technologies, opens new avenues for synthesizing molecular, cellular, and anatomical analysis of vertebrate development. Optical cropping of the region of interest enables visualization of structures that are morphologically complex or obscured, and solid surface rendering of fluorescent signal facilitates understanding of 3D structures. We have applied these technologies to whole-mount immunostained mouse embryos to visualize developmental morphogenesis of the mammalian inner ear and heart. Using molecular markers of neuron development and transgenic reporters of neural crest cell lineage, we have examined development of inner ear neurons that originate from the otic vesicle, along with the supporting glial cells that derive from the neural crest. The image analysis reveals a previously unrecognized coordinated spatial organization between migratory neural crest cells and neurons of the cochleovestibular nerve. The images also enable visualization of early cochlear spiral nerve morphogenesis relative to the developing cochlea, demonstrating a heretofore unknown association of neural crest cells with extending peripheral neurite projections. We performed similar analysis of embryonic hearts in mouse and chick, documenting the distribution of adhesion molecules during septation of the outflow tract and remodeling of aortic arches. Surface rendering of lumen space defines the morphology in a manner similar to resin injection casting and micro-CT.

  9. PLOT3D user's manual

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela P.; Buning, Pieter G.; Pierce, Larry; Elson, Patricia A.

    1990-01-01

    PLOT3D is a computer graphics program designed to visualize the grids and solutions of computational fluid dynamics. Seventy-four functions are available. Versions are available for many systems. PLOT3D can handle multiple grids with a million or more grid points, and can produce varieties of model renderings, such as wireframe or flat shaded. Output from PLOT3D can be used in animation programs. The first part of this manual is a tutorial that takes the reader, keystroke by keystroke, through a PLOT3D session. The second part of the manual contains reference chapters, including the helpfile, data file formats, advice on changing PLOT3D, and sample command files.

  10. [Design of visualized medical images network and web platform based on MeVisLab].

    PubMed

    Xiang, Jun; Ye, Qing; Yuan, Xun

    2017-04-01

    With the trend toward "Internet +", further requirements for the mobility of medical images have arisen in the medical field. In view of this demand, this paper presents a web-based visual medical imaging platform. First, the feasibility and the technical points of web-based medical imaging are analyzed. CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images are reconstructed three-dimensionally by MeVisLab and packaged as X3D (Extensible 3D Graphics) files, as shown in the present paper. Then, a B/S (Browser/Server) system specially designed for 3D images is built using HTML5 and the WebGL rendering engine library, and the X3D image file is parsed and rendered by the system. The results of this study showed that the platform was suitable for multiple operating systems, realizing cross-platform use and mobility of medical image data. Future development directions of medical imaging platforms are also pointed out in this paper. It is noted that web application technology will not only promote the sharing of medical image data, but also facilitate image-based remote medical consultations and distance learning.

  11. Particle physics and polyhedra proximity calculation for hazard simulations in large-scale industrial plants

    NASA Astrophysics Data System (ADS)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphics package, Blender, with the aim of analyzing, in virtual reality, scenarios of hazards in large-scale industrial plants. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution, and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refinery are presented.

  12. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  13. Overlapping bio-absorbable scaffolds: Aim for D2D technique?

    PubMed

    Khan, Asaad A; Dangas, George D

    2018-06-01

    The results of overlapping metallic stents have been concerning, but this practice is often unavoidable in the setting of long or tortuous lesions, diameter discrepancy between the proximal and distal vessel, and residual dissections. Theoretically, bio-absorbable scaffolds may carry an advantage over metallic stents, as the progressive resorption of the scaffold renders the overlap a non-issue; this has not, however, been clinically evident. Since stent/scaffold overlap cannot be entirely avoided, improved stent delivery/deployment and scaffold design modification may reduce complications in this complex patient subset. © 2018 Wiley Periodicals, Inc.

  14. Evolution of the Varrier autostereoscopic VR display: 2001-2007

    NASA Astrophysics Data System (ADS)

    Peterka, Tom; Kooima, Robert L.; Girado, Javier I.; Ge, Jinghua; Sandin, Daniel J.; DeFanti, Thomas A.

    2007-02-01

    Autostereoscopy (AS) is an increasingly valuable virtual reality (VR) display technology; indeed, the IS&T / SPIE Electronic Imaging Conference has seen rapid growth in the number and scope of AS papers in recent years. The first Varrier paper appeared at SPIE in 2001, and much has changed since then. What began as a single-panel prototype has grown to a full scale VR autostereo display system, with a variety of form factors, features, and options. Varrier is a barrier strip AS display system that qualifies as a true VR display, offering a head-tracked ortho-stereo first person interactive VR experience without the need for glasses or other gear to be worn by the user. Since Varrier's inception, new algorithmic and systemic developments have produced performance and quality improvements. Visual acuity has increased by a factor of 1.4X with new fine-resolution barrier strip linescreens and computational algorithms that support variable sub-pixel resolutions. Performance has improved by a factor of 3X using a new GPU shader-based sub-pixel algorithm that accomplishes in one pass what previously required three passes. The Varrier modulation algorithm that began as a computationally expensive task is now no more costly than conventional stereoscopic rendering. Interactive rendering rates of 60 Hz are now possible in Varrier for complex scene geometry on the order of 100K vertices, and performance is GPU bound, hence it is expected to continue improving with graphics card enhancements. Head tracking is accomplished with a neural network camera-based tracking system developed at EVL for Varrier. Multiple cameras capture subjects at 120 Hz and the neural network recognizes known faces from a database and tracks them in 3D space. New faces are trained and added to the database in a matter of minutes, and accuracy is comparable to commercially available tracking systems. Varrier supports a variety of VR applications, including visualization of polygonal, ray traced, and volume rendered data. Both AS movie playback of pre-rendered stereo frames and interactive manipulation of 3D models are supported. Local as well as distributed computation is employed in various applications. Long-distance collaboration has been demonstrated with AS teleconferencing in Varrier. A variety of application domains such as art, medicine, and science have been exhibited, and Varrier exists in a variety of form factors from large tiled installations to smaller desktop forms to fit a variety of space and budget constraints. Newest developments include the use of a dynamic parallax barrier that affords features that were inconceivable with a static barrier.

  15. 3D surface rendered MR images of the brain and its vasculature.

    PubMed

    Cline, H E; Lorensen, W E; Souza, S P; Jolesz, F A; Kikinis, R; Gerig, G; Kennedy, T E

    1991-01-01

    Both time-of-flight and phase contrast magnetic resonance angiography images are combined with stationary tissue images to provide data depicting two contrast relationships, yielding intrinsic discrimination of brain matter and flowing blood. A computer analysis based on nearest-neighbor segmentation and the connections between anatomical structures partitions the images into different tissue categories, from which high-resolution brain parenchymal and vascular surfaces are constructed and rendered in juxtaposition, aiding in surgical planning.

  16. Combined in-depth, 3D, en face imaging of the optic disc, optic disc pits and optic disc pit maculopathy using swept-source megahertz OCT at 1050 nm.

    PubMed

    Maertz, Josef; Kolb, Jan Philip; Klein, Thomas; Mohler, Kathrin J; Eibl, Matthias; Wieser, Wolfgang; Huber, Robert; Priglinger, Siegfried; Wolf, Armin

    2018-02-01

    To demonstrate papillary imaging of eyes with optic disc pits (ODP) or optic disc pit associated maculopathy (ODP-M) with ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s. To generate 3D renderings of the papillary area with 3D volume reconstructions of the ODP and highly resolved en face images from a single densely sampled megahertz-OCT (MHz-OCT) dataset for investigation of ODP characteristics. A 1.68 MHz prototype SS-OCT system at 1050 nm based on a Fourier-domain mode-locked laser was employed to acquire high-definition 3D datasets with a dense sampling of 1600 × 1600 A-scans over a 45° field of view. Six eyes with ODPs, and two further eyes with glaucomatous alteration or without ocular pathology, are presented. 3D renderings of the deep papillary structures, virtual 3D reconstructions of the ODPs, and depth-resolved isotropic en face images were generated using semiautomatic segmentation. 3D rendering and en face imaging of the optic disc, ODPs, and ODP-associated pathologies showed a broad spectrum of ODP characteristics. Between individuals, the shape of the ODP and the appending pathologies varied considerably. MHz-OCT en face imaging generates distinct top-view images of ODPs and ODP-M. MHz-OCT generates high-resolution images of retinal pathologies associated with ODP-M and allows visualizing ODPs with depths of up to 2.7 mm. Different patterns of ODPs can be visualized in patients for the first time using 3D reconstructions and co-registered high-definition en face images extracted from a single densely sampled 1050 nm MHz-OCT dataset. As the immediate vicinity of the subarachnoid space (SAS) and the site of intrapapillary proliferation are located at the bottom of the ODP, it is crucial to image the complete structure and the whole depth of ODPs. Especially in very deep pits, where non-swept-source OCT fails to reach the bottom, conventional swept-source devices and the MHz-OCT alike are feasible and beneficial methods to examine deep details of optic disc pathologies, while the MHz-OCT bears the advantage of an essentially swifter imaging process.

  17. A New Approach to the Visual Rendering of Mantle Tomography

    NASA Astrophysics Data System (ADS)

    Holtzman, B. K.; Pratt, M. J.; Turk, M.; Hannasch, D. A.

    2016-12-01

    Visualization of mantle tomographic models requires a range of subjective aesthetic decisions that are often made subconsciously or left unarticulated by authors. Many of these decisions affect the interpretations of the model, and therefore should be articulated and understood. In 2D, these decisions are manifest in the choice of colormap, including the data values associated with the neutral/transitional color band, as well as the correspondence between the extrema in the colormap and the parameters of the extrema. For example, we generally choose warm colors to signify slow and cool colors to signify fast velocities (or perturbations), but where is the transition, and what are the color gradients from transition to extrema? In 3D, volumes are generally rendered by choosing an isosurface of a velocity perturbation (relative to a model at each depth) and coloring it slow to fast. The choice of isosurface is arbitrary or guided by a researcher's intuition, again strongly affecting (or driven by) the interpretation. Here, we present a different approach to 3D rendering of tomography models, using true volumetric rendering with "yt", a Python package for visualization and analysis of data. In our approach, we do not use isosurfaces; instead, we render the extrema in the tomographic model as the most opaque, with an opacity function that touches zero (totally transparent) at dynamically selected values, or at the average value at each depth. The intent is that the most robust aspects of the model are visually clear, and that the visualization emphasizes the nature of the interfaces between regions as well as the form of distinct mantle regions. Much of the current scientific discussion in upper mantle tomography focuses on the nature of interfaces, so we will demonstrate how decisions in the definition of the transparent regions influence interpretation of tomographic models. Our aim is to develop a visual language for tomographic visualization that can help focus geodynamic questions.
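    A generic sketch of the opacity mapping described above (deliberately not tied to yt's API): opacity grows with distance from the per-depth average perturbation, so the extrema of the model are most opaque and values near the depth average are fully transparent. The (depth, lat, lon) array layout and the gamma parameter are illustrative assumptions.

      # Opacity from distance to the per-depth mean: zero (transparent) at
      # the depth average, one (most opaque) at the extrema of each layer.
      import numpy as np

      def opacity(dv, gamma=1.0):
          """dv: (n_depth, n_lat, n_lon) velocity perturbations."""
          mean = dv.mean(axis=(1, 2), keepdims=True)   # average at each depth
          dev = np.abs(dv - mean)                      # distance from that mean
          peak = dev.max(axis=(1, 2), keepdims=True)
          peak[peak == 0] = 1.0                        # guard uniform layers
          return (dev / peak) ** gamma                 # gamma > 1 dims mid-range

      dv = np.random.default_rng(2).normal(0.0, 1.0, size=(64, 90, 180))
      alpha = opacity(dv, gamma=2.0)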

  18. 3D Pathology Volumetric Technique: A Method for Calculating Breast Tumour Volume from Whole-Mount Serial Section Images

    PubMed Central

    Clarke, G. M.; Murray, M.; Holloway, C. M. B.; Liu, K.; Zubovits, J. T.; Yaffe, M. J.

    2012-01-01

    Tumour size, most commonly measured by maximum linear extent, remains a strong predictor of survival in breast cancer. Tumour volume, proportional to the number of tumour cells, may be a more accurate surrogate for size. We describe a novel “3D pathology volumetric technique” for lumpectomies and compare it with 2D measurements. Volume renderings and total tumour volume are computed from digitized whole-mount serial sections using custom software tools. Results are presented for two lumpectomy specimens selected for tumour features which may challenge accurate measurement of tumour burden with conventional, sampling-based pathology: (1) an infiltrative pattern admixed with normal breast elements; (2) a localized invasive mass separated from the in situ component by benign tissue. Spatial relationships between key features (tumour foci, close or involved margins) are clearly visualized in volume renderings. Invasive tumour burden can be underestimated using conventional pathology, compared to the volumetric technique (infiltrative pattern: 30% underestimation; localized mass: 3% underestimation for invasive tumour, 44% for in situ component). Tumour volume approximated from 2D measurements (i.e., maximum linear extent), assuming elliptical geometry, was seen to overestimate volume compared to the 3D volumetric calculation (by a factor of 7x for the infiltrative pattern; 1.5x for the localized invasive mass). PMID:23320179
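    The gap between the two estimates can be made concrete with a small worked comparison, under stated assumptions: the 2D approach models the tumour as an ellipsoid built from three orthogonal linear extents, while the volumetric technique sums the voxel volumes of segmented tumour across the serial sections. The numbers below are illustrative, not the paper's data.

      # Ellipsoid estimate from three orthogonal extents vs. voxel-summed
      # volume from segmented whole-mount sections (illustrative numbers).
      import math

      def ellipsoid_volume(d1_mm, d2_mm, d3_mm):
          """V = (4/3) * pi * a * b * c, with semi-axes from the diameters."""
          return (4.0 / 3.0) * math.pi * (d1_mm / 2) * (d2_mm / 2) * (d3_mm / 2)

      def voxel_volume(n_tumour_voxels, voxel_mm3):
          """Count of segmented tumour voxels times the volume of one voxel."""
          return n_tumour_voxels * voxel_mm3

      v_2d = ellipsoid_volume(30.0, 22.0, 18.0)   # from linear extents
      v_3d = voxel_volume(125_000, 0.02)          # from serial sections
      print(f"ellipsoid: {v_2d:.0f} mm^3, volumetric: {v_3d:.0f} mm^3, "
            f"ratio: {v_2d / v_3d:.1f}x")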

  19. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach to the modelling of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. The variations of both skin tissue structure and the major chromophores are taken into account, corresponding to different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computing devices and operating systems. The simulated human skin reflectance spectra, corresponding skin colours, and examples of 3D face rendering are presented and compared with the results of phantom studies.

  20. Automatic 3D virtual scenes modeling for multisensors simulation

    NASA Astrophysics Data System (ADS)

    Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu

    2006-05-01

    SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate and make interoperable 3D mock-ups in the frame of virtual reality and simulation. This paper shows an original application of the SEDRIS concept to physical multi-sensor research simulation, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multi-spectral 3D scenes and to generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that enables automatic creation of 3D databases directly usable by the physical sensor simulation renderers of CHORALE. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensor physical extensions, fitted to the ray tracing rendering of CHORALE for the infrared, electromagnetic, and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data, rather than enhancing the 3D meshed level, which is more efficient (rapid database generation) and more reliable (can be generated many times, changing only some parameters). The paper concludes with the current evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for automatic generation of the inner parts of buildings.

  1. Virtual environments from panoramic images

    NASA Astrophysics Data System (ADS)

    Chapman, David P.; Deacon, Andrew

    1998-12-01

    A number of recent projects have demonstrated the utility of Internet-enabled image databases for the documentation of complex, inaccessible, and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, and thus 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper will demonstrate how such technologies can be customized, extended, and linked to facility management systems delivered over a corporate intranet to enable end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for the integration of such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) will be described, as will techniques for precise 'as-built' modeling using the calibrated images from which panoramas have been derived, and the use of textures from these images to increase the realism of rendered scenes. A number of case studies relating to both nuclear and process engineering will demonstrate the extent to which such solutions are scalable in order to deal with the very large volumes of image data required to fully document the large, complex facilities typical of these industry sectors.

  2. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot by itself provide a satisfactory quality of experience for radiologists. This paper presents a medical system that can deliver medical images from the picture archiving and communication system (PACS) to the mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction, and direct volume rendering, to provide shape, brightness, depth, and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience over WLAN and 3G networks.
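    A hedged sketch of the adaptation idea: the proxy server can lower the rendered image resolution and compression quality as measured throughput drops, keeping interaction smooth on WLAN and 3G links alike. The thresholds and parameter tiers below are illustrative assumptions, not values from the paper.

      # Map an estimated network throughput to render parameters so the
      # proxy keeps interaction smooth as the link degrades.
      def render_params(throughput_kbps):
          """Return (image_resolution_px, jpeg_quality) for the next frame."""
          if throughput_kbps > 4000:       # comfortable WLAN
              return (1024, 90)
          if throughput_kbps > 1000:       # weak WLAN / good 3G
              return (512, 75)
          return (256, 60)                 # constrained 3G

      for kbps in (8000, 2000, 300):
          print(kbps, "->", render_params(kbps))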

  3. Diagnostic accuracy of translucency rendering to differentiate polyps from pseudopolyps at 3D endoluminal CT colonography: a feasibility study.

    PubMed

    Guerrisi, A; Marin, D; Laghi, A; Di Martino, M; Iafrate, F; Iannaccone, R; Catalano, C; Passariello, R

    2010-08-01

    The aim of this study was to assess the accuracy of translucency rendering (TR) in computed tomographic (CT) colonography without cathartic preparation using primary 3D reading. From 350 patients with 482 endoscopically verified polyps, 50 pathologically proven polyps and 50 pseudopolyps were retrospectively examined. For faecal tagging, all patients ingested 140 ml of an orally administered iodinated contrast agent (diatrizoate meglumine and diatrizoate sodium) at meals 48 h prior to the CT colonography examination and 2 h prior to scanning. CT colonography was performed using a 64-section CT scanner. Colonoscopy with segmental unblinding was performed within 2 weeks after CT. Three independent radiologists retrospectively evaluated TR CT colonographic images using a dedicated software package (V3D-Colon System). To enable size-dependent statistical analysis, lesions were stratified into the following size categories: small (≤5 mm), intermediate (6-9 mm), and large (≥10 mm). Overall average TR sensitivity for polyp characterisation was 96.6%, and overall average specificity for pseudopolyp characterisation was 91.3%. Overall average diagnostic accuracy (area under the curve) of TR for characterising colonic lesions was 0.97. TR is an accurate tool that facilitates interpretation of images obtained with a primary 3D analysis, thus enabling easy differentiation of polyps from pseudopolyps.

  4. 7 CFR 1773.39 - Utility plant and accumulated depreciation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... all services contracted for were in fact rendered; (iii) Reviewed time cards and pay rates for several... differences in the workpapers; and (3) Commented, in the management letter, on any discrepancies. (d...

  5. General Purpose Electronegativity Relaxation Charge Models Applied to CoMFA and CoMSIA Study of GSK-3 Inhibitors.

    PubMed

    Tsareva, Daria A; Osolodkin, Dmitry I; Shulga, Dmitry A; Oliferenko, Alexander A; Pisarev, Sergey A; Palyulin, Vladimir A; Zefirov, Nikolay S

    2011-03-14

    Two fast empirical charge models, the Kirchhoff Charge Model (KCM) and Dynamic Electronegativity Relaxation (DENR), had been developed in our laboratory previously for widespread use in drug design research. Both models are based on the electronegativity relaxation principle (Adv. Quantum Chem. 2006, 51, 139-156) and parameterized against ab initio dipole/quadrupole moments and molecular electrostatic potentials, respectively. As 3D QSAR studies comprise one of the most important fields of applied molecular modeling, they naturally became the first topic on which to test our charges and thus, indirectly, the assumptions underlying the charge model theories in a case study. Here these charge models are used in the CoMFA and CoMSIA methods and tested on five glycogen synthase kinase 3 (GSK-3) inhibitor datasets, relevant to our current studies, and one steroid dataset. For comparison, eight other charge models, ab initio through semiempirical and empirical, were tested on the same datasets. A complex analysis including correlation and cross-validation, charge robustness and predictability, as well as visual interpretability of the 3D contour maps generated, was carried out. As a result, our new electronegativity relaxation-based models both showed stable results, which, in conjunction with other benefits discussed, render them suitable for building reliable 3D QSAR models. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Dynamic three-dimensional display of common congenital cardiac defects from reconstruction of two-dimensional echocardiographic images.

    PubMed

    Hsieh, K S; Lin, C C; Liu, W S; Chen, F L

    1996-01-01

    Two-dimensional echocardiography has long been a standard diagnostic modality for congenital heart disease. Further attempts at three-dimensional reconstruction using two-dimensional echocardiographic images to visualize the stereotypic structure of cardiac lesions have been successful only recently. So far, only very few studies have been done to display the three-dimensional anatomy of the heart through two-dimensional image acquisition, because of the complex procedures involved. This study introduces a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in our Echo Laboratory from about 3000 completed Echo examinations. Each image was acquired on-line with a specially designed high-resolution image grabber using EKG and respiratory gating techniques. Off-line image processing, using a window-architectured interactive software package, includes construction of 2D echocardiographic pixels into 3D "voxels" with conversion between orthogonal and rotatory axial systems, interpolation, extraction of the region of interest, segmentation, shading and, finally, 3D rendering. The three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" is recorded on videotape with a video interface. The potential application of 3D display reconstructed from 2D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3D display was able to improve the diagnostic ability of echocardiography, and clear-cut display of the various congenital cardiac defects and valvular stenosis could be demonstrated. Reinforcement of current techniques will expand future applications of 3D display of conventional 2D images.

  7. Grebe dysplasia - prenatal diagnosis based on rendered 3-D ultrasound images of fetal limbs.

    PubMed

    Goncalves, Luis F; Berger, Julie A; Macknis, Jacqueline K; Bauer, Samuel T; Bloom, David A

    2017-01-01

    Grebe dysplasia is a rare skeletal dysplasia characterized by severe acromesomelic shortening of the long bones in a proximal to distal gradient of severity, with bones of the hands and feet more severely affected than those of the forearms and legs, which in turn are more severely affected than the humeri and femora. In addition, the bones of the lower extremities tend to be more severely affected than the bones of the upper extremities. Despite the severe skeletal deformities, the condition is not lethal and surviving individuals can have normal intelligence. Herein we report a case of Grebe dysplasia diagnosed at 20 weeks of gestation. Rendered 3-D ultrasound images of the fetal limbs, particularly of the characteristic tiny and globular-looking fingers and toes, were instrumental in accurately characterizing the phenotype prenatally.

  8. Immersive volume rendering of blood vessels

    NASA Astrophysics Data System (ADS)

    Long, Gregory; Kim, Han Suk; Marsden, Alison; Bazilevs, Yuri; Schulze, Jürgen P.

    2012-03-01

    In this paper, we present a novel method of visualizing flow in blood vessels. Our approach reads unstructured tetrahedral data, resamples it, and uses slice-based 3D texture volume rendering. Due to the sparse structure of blood vessels, we utilize an octree to efficiently store the resampled data by discarding empty regions of the volume. We use animation to convey time series data, a wireframe surface to give structure, and the StarCAVE, a 3D virtual reality environment, to add a fully immersive element to the visualization. Our tool has great value in interdisciplinary work, helping scientists collaborate with clinicians by improving the understanding of blood flow simulations. Full immersion in the flow field allows for a more intuitive understanding of the flow phenomena and can be a great help to medical experts for treatment planning.
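    The sparse-storage idea can be sketched directly: an octree subdivides the resampled volume and discards bricks that are entirely empty (at or below a threshold). This is a minimal sketch of that structure only; the slice-based texture rendering and StarCAVE integration are outside its scope, and the leaf size and threshold are illustrative assumptions.

      # Octree over a dense volume that discards empty regions entirely.
      import numpy as np

      class Node:
          """Octree node: either a dense leaf brick or a list of children."""
          def __init__(self, origin, shape, data=None, children=None):
              self.origin, self.shape = origin, shape
              self.data, self.children = data, children

      def build_octree(vol, origin=(0, 0, 0), leaf=8, threshold=0.0):
          """Return an octree over vol, or None if the region is empty."""
          if vol.max() <= threshold:
              return None                               # discard empty region
          if max(vol.shape) <= leaf:
              return Node(origin, vol.shape, data=vol)  # dense leaf brick
          hx, hy, hz = (s // 2 for s in vol.shape)
          children = []
          for ox, sx in ((0, hx), (hx, vol.shape[0] - hx)):
              for oy, sy in ((0, hy), (hy, vol.shape[1] - hy)):
                  for oz, sz in ((0, hz), (hz, vol.shape[2] - hz)):
                      sub = vol[ox:ox + sx, oy:oy + sy, oz:oz + sz]
                      if sub.size == 0:
                          continue
                      child = build_octree(
                          sub, (origin[0] + ox, origin[1] + oy, origin[2] + oz),
                          leaf, threshold)
                      if child is not None:
                          children.append(child)
          return Node(origin, vol.shape, children=children) if children else None

      vol = np.zeros((64, 64, 64), dtype=np.float32)
      vol[10:20, 30:40, 5:15] = 1.0                     # sparse vessel-like region
      tree = build_octree(vol)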

  9. [Research on Three-dimensional Medical Image Reconstruction and Interaction Based on HTML5 and Visualization Toolkit].

    PubMed

    Gao, Peng; Liu, Peng; Su, Hongsen; Qiao, Liang

    2015-04-01

    Integrating the Visualization Toolkit with the interaction, bidirectional communication, and graphics rendering capabilities provided by HTML5, we explored and experimented on the feasibility of remote medical image reconstruction and interaction purely on the Web. We propose a server-centric method which does not need to download large medical data sets to local machines, and which avoids dependence on network transmission pressure and the three-dimensional (3D) rendering capability of client hardware. The method integrates remote medical image reconstruction and interaction into the Web seamlessly, and is applicable to lower-end computers and mobile devices. Finally, we tested this method over the Internet and achieved real-time performance. This Web-based 3D reconstruction and interaction method, which works across internet terminals and performance-limited devices, may be useful for remote medical assistance.

  10. Multilayered nonuniform sampling for three-dimensional scene representation

    NASA Astrophysics Data System (ADS)

    Lin, Huei-Yung; Xiao, Yu-Hua; Chen, Bo-Ren

    2015-09-01

    The representation of a three-dimensional (3-D) scene is essential in multiview imaging technologies. We present a unified geometry and texture representation based on global resampling of the scene. A layered data map representation with a distance-dependent nonuniform sampling strategy is proposed. It is capable of increasing the details of the 3-D structure locally and is compact in size. The 3-D point cloud obtained from the multilayered data map is used for view rendering. For any given viewpoint, image synthesis with different levels of detail is carried out using the quadtree-based nonuniformly sampled 3-D data points. Experimental results are presented using the 3-D models of reconstructed real objects.

  11. Information Visualization Techniques for Effective Cross-Discipline Communication

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2013-04-01

    Collaboration between research groups in different fields is a common occurrence, but it can often be frustrating due to the absence of a common vocabulary. This lack of a shared context can make expressing important concepts and discussing results difficult. This problem may be further exacerbated when communicating to an audience of laypeople. Without a clear frame of reference, simple concepts are often rendered difficult-to-understand at best, and unintelligible at worst. An easy way to alleviate this confusion is with the use of clear, well-designed visualizations to illustrate an idea, process or conclusion. There exist a number of well-described machine-learning and statistical techniques which can be used to illuminate the information present within complex high-dimensional datasets. Once the information has been separated from the data, clear communication becomes a matter of selecting an appropriate visualization. Ideally, the visualization is information-rich but data-scarce. Anything from a simple bar chart, to a line chart with confidence intervals, to an animated set of 3D point-clouds can be used to render a complex idea as an easily understood image. Several case studies will be presented in this work. In the first study, we will examine how a complex statistical analysis was applied to a high-dimensional dataset, and how the results were succinctly communicated to an audience of microbiologists and chemical engineers. Next, we will examine a technique used to illustrate the concept of the singular value decomposition, as used in the field of computer vision, to a lay audience of undergraduate students from mixed majors. We will then examine a case where a simple animated line plot was used to communicate an approach to signal decomposition, and will finish with a discussion of the tools available to create these visualizations.

  12. Dimensionality of visual complexity in computer graphics scenes

    NASA Astrophysics Data System (ADS)

    Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce

    2008-02-01

    How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the subject responses using multidimensional scaling of pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be very correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
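    The pooled-response analysis can be sketched with an off-the-shelf multidimensional scaling step: a pairwise dissimilarity matrix over the 21 stimuli is embedded in two dimensions. The matrix below is random stand-in data; in the study it would be derived from subjects' relative complexity judgments.

      # Embed 21 stimuli in 2D from a pairwise dissimilarity matrix with
      # multidimensional scaling (matrix here is random stand-in data).
      import numpy as np
      from sklearn.manifold import MDS

      rng = np.random.default_rng(3)
      raw = rng.uniform(0.0, 1.0, size=(21, 21))
      dissim = (raw + raw.T) / 2.0          # symmetrise pooled judgments
      np.fill_diagonal(dissim, 0.0)         # zero self-dissimilarity

      mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
      coords = mds.fit_transform(dissim)    # (21, 2) embedding; axes might
      print(coords[:3])                     # read as numerosity vs. materials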

  13. Model-based registration of multi-rigid-body for augmented reality

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro

    2009-02-01

    Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of a visual marker. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single, rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then simultaneously calculated from distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.

  14. Detection of compression vessels in trigeminal neuralgia by surface-rendering three-dimensional reconstruction of 1.5- and 3.0-T magnetic resonance imaging.

    PubMed

    Shimizu, Masahiro; Imai, Hideaki; Kagoshima, Kaiei; Umezawa, Eriko; Shimizu, Tsuneo; Yoshimoto, Yuhei

    2013-01-01

    Surface-rendered three-dimensional (3D) 1.5-T magnetic resonance (MR) imaging is useful for presurgical simulation of microvascular decompression. This study compared the sensitivity and specificity of 1.5- and 3.0-T surface-rendered 3D MR imaging for preoperative identification of the compression vessels of trigeminal neuralgia. One hundred consecutive patients underwent microvascular decompression for trigeminal neuralgia. Forty and 60 patients were evaluated by 1.5- and 3.0-T MR imaging, respectively. Three-dimensional MR images were constructed on the basis of MR imaging, angiography, and venography data and evaluated to determine the compression vessel before surgery. MR imaging findings were compared with the microsurgical findings to compare the sensitivity and specificity of 1.5- and 3.0-T MR imaging. The agreement between MR imaging and surgical findings depended on the compression vessels. For superior cerebellar artery, 1.5- and 3.0-T MR imaging had 84.4% and 82.7% sensitivity and 100% and 100% specificity, respectively. For anterior inferior cerebellar artery, 1.5- and 3.0-T MR imaging had 33.3% and 50% sensitivity and 92.9% and 95% specificity, respectively. For the petrosal vein, 1.5- and 3.0-T MR imaging had 75% and 64.3% sensitivity and 79.2% and 78.1% specificity, respectively. Complete pain relief was obtained in 36 of 40 and 55 of 60 patients undergoing 1.5- and 3.0-T MR imaging, respectively. The present study showed that both 1.5- and 3.0-T MR imaging provided high sensitivity and specificity for preoperative assessment of the compression vessels of trigeminal neuralgia. Preoperative 3D imaging provided very high quality presurgical simulation, resulting in excellent clinical outcomes. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Development of the mouse cochlea database (MCD).

    PubMed

    Santi, Peter A; Rapson, Ian; Voie, Arne

    2008-09-01

    The mouse cochlea database (MCD) provides an interactive image database of the mouse cochlea for learning its anatomy and for data mining of its resources. The MCD website is hosted on a centrally maintained, high-speed server at the following URL: http://mousecochlea.umn.edu. The MCD contains two types of image resources: serial 2D image stacks and 3D reconstructions of cochlear structures. Complete image stacks of the cochlea from two different mouse strains were obtained using orthogonal plane fluorescence optical microscopy (OPFOS). 2D images of the cochlea are presented on the MCD website as viewable images within a stack, a 2D atlas of the cochlea, orthogonal sections, and direct volume renderings combined with isosurface reconstructions. In order to assess cochlear structures quantitatively, "true" cross-sections of the scala media along the length of the basilar membrane were generated by virtual resectioning of a cochlea orthogonal to a cochlear structure, such as the centroid of the basilar membrane or the scala media. 3D images are presented on the MCD website as direct volume renderings, movies, interactive QuickTime VRs, flythroughs, and isosurface 3D reconstructions of different cochlear structures. 3D computer models can also be used for solid model fabrication by rapid prototyping, and models from different cochleas can be combined to produce an average 3D model. The MCD is the first comprehensive image resource on the mouse cochlea and is a new paradigm for understanding the anatomy of the cochlea and establishing morphometric parameters of cochlear structures in normal and mutant mice.

  16. Clinical Application of an Open-Source 3D Volume Rendering Software to Neurosurgical Approaches.

    PubMed

    Fernandes de Oliveira Santos, Bruno; Silva da Costa, Marcos Devanir; Centeno, Ricardo Silva; Cavalheiro, Sergio; Antônio de Paiva Neto, Manoel; Lawton, Michael T; Chaddad-Neto, Feres

    2018-02-01

    Preoperative recognition of the anatomic individualities of each patient can help to achieve more precise and less invasive approaches. It also may help to anticipate potential complications and intraoperative difficulties. Here we describe the use, accuracy, and precision of a free tool for planning microsurgical approaches using 3-dimensional (3D) reconstructions from magnetic resonance imaging (MRI). We used the 3D volume rendering tool of a free open-source software program for 3D reconstruction of images of surgical sites obtained by MRI volumetric acquisition. We recorded anatomic reference points, such as the sulcus and gyrus, and vascularization patterns for intraoperative localization of lesions. Lesion locations were confirmed during surgery by intraoperative ultrasound and/or electrocorticography and later by postoperative MRI. Between August 2015 and September 2016, a total of 23 surgeries were performed using this technique for 9 low-grade gliomas, 7 high-grade gliomas, 4 cortical dysplasias, and 3 arteriovenous malformations. The technique helped delineate lesions with an overall accuracy of 2.6 ± 1.0 mm. 3D reconstructions were successfully performed in all patients, and images showed sulcus, gyrus, and venous patterns corresponding to the intraoperative images. All lesion areas were confirmed both intraoperatively and at the postoperative evaluation. With the technique described herein, it was possible to successfully perform 3D reconstruction of the cortical surface. This reconstruction tool may serve as an adjunct to neuronavigation systems or may be used alone when such a system is unavailable. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Expression and Association of the Yersinia pestis Translocon Proteins, YopB and YopD, Are Facilitated by Nanolipoprotein Particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Matthew A.; Cappuccio, Jenny A.; Blanchette, Craig D.

    Yersinia pestis enters host cells and evades host defenses, in part, through interactions between Yersinia pestis proteins and host membranes. One such interaction is through the type III secretion system, which uses a highly conserved and ordered complex for Yersinia pestis outer membrane effector protein translocation called the injectisome. The portion of the injectisome that interacts directly with host cell membranes is referred to as the translocon. The translocon is believed to form a pore allowing effector molecules to enter host cells. To facilitate mechanistic studies of the translocon, we have developed a cell-free approach for expressing translocon pore proteins as a complex supported in a bilayer membrane mimetic nano-scaffold known as a nanolipoprotein particle (NLP). Initial results show cell-free expression of Yersinia pestis outer membrane proteins YopB and YopD was enhanced in the presence of liposomes. However, these complexes tended to aggregate and precipitate. With the addition of co-expressed NLP-forming components, the YopB and/or YopD complex was rendered soluble, increasing the yield of protein for biophysical studies. Biophysical methods such as Atomic Force Microscopy and Fluorescence Correlation Spectroscopy were used to confirm that the soluble YopB/D complex was associated with NLPs. An interaction between the YopB/D complex and NLP was validated by immunoprecipitation. The YopB/D translocon complex embedded in a NLP provides a platform for protein interaction studies between pathogen and host proteins. Ultimately, these studies will help elucidate the poorly understood mechanism which enables this pathogen to inject effector proteins into host cells, thus evading host defenses.

  18. Expression and Association of the Yersinia pestis Translocon Proteins, YopB and YopD, Are Facilitated by Nanolipoprotein Particles

    DOE PAGES

    Coleman, Matthew A.; Cappuccio, Jenny A.; Blanchette, Craig D.; ...

    2016-03-25

    Yersinia pestis enters host cells and evades host defenses, in part, through interactions between Yersinia pestis proteins and host membranes. One such interaction is through the type III secretion system, which uses a highly conserved and ordered complex for Yersinia pestis outer membrane effector protein translocation called the injectisome. The portion of the injectisome that interacts directly with host cell membranes is referred to as the translocon. The translocon is believed to form a pore allowing effector molecules to enter host cells. To facilitate mechanistic studies of the translocon, we have developed a cell-free approach for expressing translocon pore proteins as a complex supported in a bilayer membrane mimetic nano-scaffold known as a nanolipoprotein particle (NLP). Initial results show cell-free expression of Yersinia pestis outer membrane proteins YopB and YopD was enhanced in the presence of liposomes. However, these complexes tended to aggregate and precipitate. With the addition of co-expressed NLP-forming components, the YopB and/or YopD complex was rendered soluble, increasing the yield of protein for biophysical studies. Biophysical methods such as Atomic Force Microscopy and Fluorescence Correlation Spectroscopy were used to confirm that the soluble YopB/D complex was associated with NLPs. An interaction between the YopB/D complex and NLP was validated by immunoprecipitation. The YopB/D translocon complex embedded in a NLP provides a platform for protein interaction studies between pathogen and host proteins. Ultimately, these studies will help elucidate the poorly understood mechanism which enables this pathogen to inject effector proteins into host cells, thus evading host defenses.

  19. High-quality slab-based intermixing method for fusion rendering of multiple medical objects.

    PubMed

    Kim, Dong-Joon; Kim, Bohyoung; Lee, Jeongjin; Shin, Juneseuk; Kim, Kyoung Won; Shin, Yeong-Gil

    2016-01-01

    The visualization of multiple 3D objects has been increasingly required for recent applications in medical fields. Due to heterogeneity in data representation or data configuration, it is difficult to efficiently render multiple medical objects in high quality. In this paper, we present a novel intermixing scheme for fusion rendering of multiple medical objects while preserving real-time performance. First, we present an in-slab visibility interpolation method for the representation of subdivided slabs. Second, we introduce virtual zSlab, which extends an infinitely thin boundary (such as polygonal objects) into a slab with a finite thickness. Finally, based on virtual zSlab and in-slab visibility interpolation, we propose a slab-based visibility intermixing method with the newly proposed rendering pipeline. Experimental results demonstrate that the proposed method delivers more effective multiple-object renderings in terms of rendering quality, compared to conventional approaches. The proposed intermixing scheme provides high-quality intermixing results for the visualization of intersecting and overlapping surfaces by resolving aliasing and z-fighting problems. Moreover, two case studies are presented that apply the proposed method to real clinical applications. These case studies show that the proposed method offers the advantages of rendering independence and reusability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
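
    The problem such a scheme generalizes is visible on a single ray: fragments from heterogeneous objects must be merged in depth order and blended front to back. A minimal Python sketch of that depth-merge with the 'over' operator follows; the fragment tuples are made-up values, and the paper's virtual zSlab and in-slab visibility interpolation are not reproduced here.

```python
# Single-ray sketch: fragments are (depth, color, alpha) tuples coming from
# two different objects, e.g. volume samples and a polygon hit (values invented).
def over_composite(frags_a, frags_b):
    color, alpha = 0.0, 0.0
    for _, c, a in sorted(frags_a + frags_b):   # sort by depth, front first
        color += (1.0 - alpha) * a * c          # front-to-back 'over' operator
        alpha += (1.0 - alpha) * a
        if alpha > 0.999:                       # early ray termination
            break
    return color, alpha

volume_ray = [(0.8, 0.9, 0.3), (1.2, 0.4, 0.5)]  # volume samples on one ray
polygon_hit = [(1.0, 1.0, 0.8)]                  # an intersecting surface fragment
print(over_composite(volume_ray, polygon_hit))
```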

  20. The production of digital and printed resources from multiple modalities using visualization and three-dimensional printing techniques.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Chen, Shi; Pan, Zhouxian; Deng, Qingqiong; Yao, Yong; Pan, Hui; He, Taiping; Wang, Xingce

    2017-01-01

    Virtual digital resources and printed models have become indispensable tools for medical training and surgical planning. Nevertheless, printed models of soft tissue organs are still challenging to reproduce. This study adopts open-source packages and a low-cost desktop 3D printer to convert multiple modalities of medical images into digital resources (volume rendering images and digital models) and lifelike printed models, which are useful for enhancing our understanding of the geometric structure and complex spatial nature of anatomical organs. Neuroimaging technologies such as CT, CTA, MRI, and TOF-MRA collect serial medical images. The procedures for producing digital resources can be divided into volume rendering and medical image reconstruction. To verify the accuracy of reconstruction, this study presents qualitative and quantitative assessments. Subsequently, digital models are archived as stereolithography format files and imported into the bundled software of the 3D printer. The printed models are produced using polylactide filament materials. We have successfully converted multiple modalities of medical images into digital resources and printed models for both hard organs (cranial base and tooth) and soft tissue organs (brain, blood vessels of the brain, the heart chambers and vessel lumen, and pituitary tumor). Multiple digital resources and printed models were provided to illustrate the anatomical relationship between organs and complicated surrounding structures. Three-dimensional printing (3DP) is a powerful tool for producing lifelike and tangible models. We present an accessible and cost-effective method for producing both digital resources and printed models. The choice of medical imaging modality and processing approach is important when reproducing models of soft tissue organs. The accuracy of a printed model is determined by the quality of the organ model and of the 3DP process. With the ongoing improvement of printing techniques and the variety of materials available, 3DP will become an indispensable tool in medical training and surgical planning.
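
    The model-generation step described above (volume to surface to printable file) can be sketched compactly. The Python fragment below, assuming scikit-image is available, extracts an isosurface from a synthetic segmented volume with marching cubes and writes an ASCII STL by hand; the sphere, iso-level, and file name are placeholders for a real organ segmentation.

```python
import numpy as np
from skimage import measure

# Synthetic "segmentation": a solid sphere standing in for an organ mask.
z, y, x = np.mgrid[:64, :64, :64]
vol = (((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 20 ** 2).astype(np.float32)

# Isosurface extraction; spacing would carry the voxel size in mm.
verts, faces, _, _ = measure.marching_cubes(vol, level=0.5, spacing=(1.0, 1.0, 1.0))

# Write an ASCII STL, the format consumed by the printer's bundled software.
with open("organ.stl", "w") as fh:
    fh.write("solid organ\n")
    for tri in faces:
        v0, v1, v2 = verts[tri]
        n = np.cross(v1 - v0, v2 - v0)
        norm = np.linalg.norm(n)
        n = n / norm if norm > 0 else n
        fh.write(f" facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n  outer loop\n")
        for v in (v0, v1, v2):
            fh.write(f"   vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
        fh.write("  endloop\n endfacet\n")
    fh.write("endsolid organ\n")
```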

  1. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, merged into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed drawing upon qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.

  2. LivePhantom: Retrieving Virtual World Light Data to Real Environments

    PubMed Central

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, merged into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed drawing upon qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663

  3. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
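
    As a concrete illustration of the MST-based clustering the abstract describes, the following Python sketch builds a minimum spanning tree over a synthetic 3D catalog with SciPy and cuts its longest edges; the catalog, scale, and cut threshold are invented, and rendering in Blender is out of scope here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

# Hypothetical 3D galaxy catalog: 200 positions (x, y, z) in arbitrary units.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 100.0, size=(200, 3))

# MST over the complete pairwise-distance graph. A dense matrix is fine at
# this scale; large catalogs would use a k-nearest-neighbour graph instead.
mst = minimum_spanning_tree(squareform(pdist(points))).tocoo()

# Unsupervised clustering: cutting MST edges longer than a threshold splits
# the catalog into groups; the surviving edges form the cluster skeleton.
threshold = np.percentile(mst.data, 90)
kept = mst.data < threshold
print(f"{kept.sum()} of {mst.data.size} edges kept below {threshold:.2f}")
```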

  4. Smooth 2D manifold extraction from 3D image stack

    PubMed Central

    Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste

    2017-01-01

    Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales, such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, thereby creating significant artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
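
    The artifact in question is easy to reproduce: MIP picks the brightest voxel along z independently per pixel, so neighboring output pixels can originate from distant z-planes. The Python sketch below shows plain MIP and a crude stand-in for the smooth-manifold idea (smoothing the implied depth map before resampling); the stack is synthetic, and this is not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

# Toy 3D stack (z, y, x) standing in for a confocal acquisition.
stack = np.random.rand(60, 256, 256).astype(np.float32)

mip = stack.max(axis=0)                              # classic MIP
z_of_max = stack.argmax(axis=0).astype(np.float32)   # the discontinuous depth map it implies

# Crude continuity constraint: smooth the depth map, then resample the volume
# along the smoothed surface so neighboring pixels come from nearby z-planes.
z_smooth = gaussian_filter(z_of_max, sigma=5)
yy, xx = np.mgrid[0:256, 0:256].astype(np.float32)
smooth_layer = map_coordinates(stack, [z_smooth, yy, xx], order=1)
```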

  5. A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-05-01

    We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable to render a 204 GB data cube with an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.
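
    The per-ray work each GPU brick performs is ordinary emission-absorption compositing. A minimal NumPy sketch of front-to-back compositing along one axis of a synthetic brick follows; distribution is then a matter of compositing sub-bricks independently and merging the partial results with the same 'over' operator. Shapes and the opacity scaling are illustrative.

```python
import numpy as np

def composite(cube, step_alpha=0.02):
    """Front-to-back 'over' compositing of axis-aligned rays, one per pixel."""
    color = np.zeros(cube.shape[1:], dtype=np.float32)
    alpha = np.zeros(cube.shape[1:], dtype=np.float32)
    for z in range(cube.shape[0]):
        a = np.clip(cube[z] * step_alpha, 0.0, 1.0)  # per-sample opacity
        color += (1.0 - alpha) * a * cube[z]         # accumulate emission
        alpha += (1.0 - alpha) * a                   # accumulate opacity
    return color, alpha

# Synthetic (z, y, x) brick; a real input would be a spectral-line data cube.
brick = np.random.rand(128, 256, 256).astype(np.float32)
image, opacity = composite(brick)
# Bricking: composite sub-bricks independently, then merge the partial
# (color, alpha) pairs in depth order with the same 'over' operator.
```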

  6. Image-guided tissue engineering of anatomically shaped implants via MRI and micro-CT using injection molding.

    PubMed

    Ballyns, Jeffery J; Gleghorn, Jason P; Niebrzydowski, Vicki; Rawlinson, Jeremy J; Potter, Hollis G; Maher, Suzanne A; Wright, Timothy M; Bonassar, Lawrence J

    2008-07-01

    This study demonstrates for the first time the development of engineered tissues based on anatomic geometries derived from widely used medical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). Computer-aided design and tissue injection molding techniques have demonstrated the ability to generate living implants of complex geometry. Due to its complex geometry, the meniscus of the knee was used as an example of this technique's capabilities. MRI and microcomputed tomography (microCT) were used to design custom-printed molds that enabled the generation of anatomically shaped constructs that retained shape throughout 8 weeks of culture. Engineered constructs showed progressive tissue formation indicated by increases in extracellular matrix content and mechanical properties. The paradigm of interfacing tissue injection molding technology can be applied to other medical imaging techniques that render 3D models of anatomy, demonstrating the potential to apply the current technique to engineering of many tissues and organs.

  7. Systems and Methods for Data Visualization Using Three-Dimensional Displays

    NASA Technical Reports Server (NTRS)

    Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)

    2017-01-01

    Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.

  8. Accuracy and repeatability of long-bone replicas of small animals fabricated by use of low-end and high-end commercial three-dimensional printers.

    PubMed

    Cone, Jamie A; Martin, Thomas M; Marcellin-Little, Denis J; Harrysson, Ola L A; Griffith, Emily H

    2017-08-01

    OBJECTIVE To assess the repeatability and accuracy of polymer replicas of small, medium, and large long bones of small animals fabricated by use of 2 low-end and 2 high-end 3-D printers. SAMPLE Polymer replicas of a cat femur, dog radius, and dog tibia were fabricated in triplicate by use of each of four 3-D printing methods. PROCEDURES 3-D renderings of the 3 bones reconstructed from CT images were prepared, and length, width of the proximal aspect, and width of the distal aspect of each CT image were measured in triplicate. Polymer replicas were fabricated by use of a high-end system that relied on jetting of curable liquid photopolymer, a high-end system that relied on polymer extrusion, a triple-nozzle polymer extrusion low-end system, and a dual-nozzle polymer extrusion low-end system. Polymer replicas were scanned by use of a laser-based coordinate measurement machine. Length, width of the proximal aspect, and width of the distal aspect of the scans of replicas were measured and compared with measurements for the 3-D renderings. RESULTS 129 measurements were collected for 34 replicas (fabrication of 1 large long-bone replica was unsuccessful on each of the 2 low-end printers). Replicas were highly repeatable for all 3-D printers. The 3-D printers overestimated dimensions of large replicas by approximately 1%. CONCLUSIONS AND CLINICAL RELEVANCE Low-end and high-end 3-D printers fabricated CT-derived replicas of bones of small animals with high repeatability. Replicas were slightly larger than the original bones.

  9. Differentiation between Symptomatic and Asymptomatic Extraforaminal Stenosis in Lumbosacral Transitional Vertebra: Role of Three-Dimensional Magnetic Resonance Lumbosacral Radiculography

    PubMed Central

    Kim, Jae Woon; Lee, Jae Kyo

    2012-01-01

    Objective To investigate the role of lumbosacral radiculography using 3-dimensional (3D) magnetic resonance (MR) rendering for diagnostic information of symptomatic extraforaminal stenosis in lumbosacral transitional vertebra. Materials and Methods The study population consisted of 18 patients with symptomatic (n = 10) and asymptomatic extraforaminal stenosis (n = 8) in lumbosacral transitional vertebra. Each patient underwent 3D coronal fast-field echo sequences with selective water excitation using the principles of the selective excitation technique (Proset imaging). Morphologic changes of the L5 nerve roots at the symptomatic and asymptomatic extraforaminal stenosis were evaluated on 3D MR rendered images of the lumbosacral spine. Results Ten cases with symptomatic extraforaminal stenosis showed hyperplasia and degenerative osteophytes of the sacral ala and/or osteophytes at the lateral margin of the L5 body. On 3D MR lumbosacral radiculography, indentation of the L5 nerve roots was found in two cases, while swelling of the nerve roots was seen in eight cases at the exiting nerve root. Eight cases with asymptomatic extraforaminal stenosis showed hyperplasia and degenerative osteophytes of the sacral ala and/or osteophytes at the lateral margin of the L5 body. Based on 3D MR lumbosacral radiculography, indentation or swelling of the L5 nerve roots was not found in any of the cases with asymptomatic extraforaminal stenosis. Conclusion Results from 3D MR lumbosacral radiculography indicate indentation or swelling of the L5 nerve root in symptomatic extraforaminal stenosis. Based on these findings, 3D MR radiculography may be helpful in the diagnosis of symptomatic extraforaminal stenosis with lumbosacral transitional vertebra. PMID:22778561

  10. Synthesis of 1D Bragg gratings by a layer-aggregation method.

    PubMed

    Capmany, José; Muriel, Miguel A; Sales, Salvador

    2007-08-15

    We present what we believe to be a novel method for the synthesis of complex 1D (fiber and waveguide) Bragg gratings, which is based on an impedance reconstruction layer aggregation technique. The main advantage brought by the method is the possibility of synthesizing structures containing defects or discontinuities of the size of the local period, a feature that is not possible with prior reported methods. In addition, this enhanced spatial resolution allows the synthesis of very strong fiber Bragg grating devices providing convergent solutions. The method directly renders the refractive index profile n(z) as it does not rely on the coupled-mode theory.
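
    For context, the forward problem that any such synthesis method inverts can be computed with the classic 2x2 characteristic-matrix formalism. The Python sketch below evaluates the reflectance spectrum of a uniform quarter-wave stack at normal incidence; the indices, pair count, and design wavelength are invented, and this is the forward model only, not the authors' layer-aggregation synthesis algorithm.

```python
import numpy as np

def bragg_reflectance(lams, n1=1.45, n2=1.47, pairs=200, lam0=1550e-9,
                      n_in=1.45, n_out=1.45):
    """Reflectance of a quarter-wave stack via 2x2 characteristic matrices."""
    layers = [(n1, lam0 / (4 * n1)), (n2, lam0 / (4 * n2))] * pairs
    R = np.empty(len(lams))
    for i, lam in enumerate(lams):
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2 * np.pi * n * d / lam          # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        num = n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]
        den = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
        R[i] = abs(num / den) ** 2
    return R

lams = np.linspace(1540e-9, 1560e-9, 101)
print(bragg_reflectance(lams).max())   # strong reflection peak near lam0
```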

  11. A web-based instruction module for interpretation of craniofacial cone beam CT anatomy.

    PubMed

    Hassan, B A; Jacobs, R; Scarfe, W C; Al-Rawi, W T

    2007-09-01

    To develop a web-based module for learner instruction in the interpretation and recognition of osseous anatomy on craniofacial cone-beam CT (CBCT) images. Volumetric datasets from three CBCT systems were acquired (i-CAT, NewTom 3G and AccuiTomo FPD) for various subjects using equipment-specific scanning protocols. The datasets were processed using multiple software packages to provide two-dimensional (2D) multiplanar reformatted (MPR) images (e.g. sagittal, coronal and axial) and three-dimensional (3D) visual representations (e.g. maximum intensity projection, minimum intensity projection, ray sum, surface and volume rendering). Distinct didactic modules which illustrate the principles of CBCT systems, guided navigation of the volumetric dataset, and anatomic correlation of 3D models and 2D MPR graphics were developed using a hybrid combination of web authoring and image analysis techniques. Interactive web multimedia instruction was facilitated by the use of dynamic highlighting and labelling, and rendered video illustrations, supplemented with didactic textual material. HTML coding and JavaScript were implemented extensively in assembling the educational modules. An interactive, multimedia educational tool for visualizing the morphology and interrelationships of osseous craniofacial anatomy, as depicted on CBCT MPR and 3D images, was designed and implemented. The present design of a web-based instruction module may assist radiologists and clinicians in learning how to recognize and interpret the craniofacial anatomy of CBCT-based images more efficiently.

  12. 3-D Sound for Virtual Reality and Multimedia

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  13. Structure Sensor for mobile markerless augmented reality

    NASA Astrophysics Data System (ADS)

    Kilgus, T.; Bux, R.; Franz, A. M.; Johnen, W.; Heim, E.; Fangerau, M.; Müller, M.; Yen, K.; Maier-Hein, L.

    2016-03-01

    3D visualization of anatomical data is an integral part of diagnostics and treatment in many medical disciplines, such as radiology, surgery and forensic medicine. To enable intuitive interaction with the data, we recently proposed a new concept for on-patient visualization of medical data which involves rendering of subsurface structures on a mobile display that can be moved along the human body. The data fusion is achieved with a range imaging device attached to the display. The range data is used to register static 3D medical imaging data with the patient body based on a surface matching algorithm. However, our previous prototype was based on the Microsoft Kinect camera and thus required a cable connection to acquire color and depth data. The contribution of this paper is two-fold. Firstly, we replace the Kinect with the Structure Sensor - a novel cable-free range imaging device - to improve handling and user experience and show that the resulting accuracy (target registration error: 4.8 ± 1.5 mm) is comparable to that achieved with the Kinect. Secondly, a new approach to visualizing complex 3D anatomy based on this device, as well as 3D printed models of anatomical surfaces, is presented. We demonstrate that our concept can be applied to in vivo data and to a 3D printed skull of a forensic case. Our new device is the next step towards clinical integration and shows that the concept can be applied not only during autopsy but also for presentation of forensic data to laypeople in court or medical education.

  14. The Cerrito Site (AR-4): A Piedra Lumbre Phase Settlement at Abiquiu Reservoir,

    DTIC Science & Technology

    1979-11-01

    entirely or partially with well-polished slip and which were not painted. Differential firing and smudging techniques render the vessels red, black, gray... collapsed. The single carbon-14 date is rendered suspicious because of the Suess effect (Samon et al. 1974). However, the influence of this effect in the... dates of A.D. 1107 and A.D. 792. The 2 oldest readings from the lithic areas (specimens 5621 and 5632) are rendered suspicious when computed with the

  15. Semiconductive 3-D haloplumbate framework hybrids with high color rendering index white-light emission

    PubMed Central

    Wang, Guan-E; Wang, Ming-Sheng; Cai, Li-Zhen; Li, Wen-Hua

    2015-01-01

    Single-component white light materials may create great opportunities for novel conventional lighting applications and display systems; however, their reported color rendering index (CRI) values, one of the key parameters for lighting, are less than 90, which does not satisfy the demand of color-critical upmarket applications, such as photography, cinematography, and art galleries. In this work, two semiconductive chloroplumbate (chloride anion of lead(II)) hybrids, obtained using a new inorganic–organic hybrid strategy, show unprecedented 3-D inorganic framework structures and white-light-emitting properties with high CRI values around 90, one of which shows the highest value to date. PMID:28757985

  16. (DCT-FY08) Target Detection Using Multiple Modality Airborne and Ground Based Sensors

    DTIC Science & Technology

    2013-03-01

    "Plenoptic modeling: an image-based rendering system," in SIGGRAPH '95: Proceedings of the 22nd annual conference on Computer graphics and interactive... techniques. New York, NY, USA: ACM, 1995, pp. 39–46. [21] D. G. Aliaga and I. Carlbom, "Plenoptic stitching: a scalable method for reconstructing 3D

  17. Towards a 3d Based Platform for Cultural Heritage Site Survey and Virtual Exploration

    NASA Astrophysics Data System (ADS)

    Seinturier, J.; Riedinger, C.; Mahiddine, A.; Peloso, D.; Boï, J.-M.; Merad, D.; Drap, P.

    2013-07-01

    This paper presents a 3D platform that supports both cultural heritage site survey and virtual exploration. It provides a single, easy-to-use framework for merging multi-scale 3D measurements based on photogrammetry, documentation produced by experts, and the knowledge of the domains involved, leaving the experts free to extract and choose the relevant information for the final survey. Taking into account the interpretation of the real world during the process of archaeological surveys is in fact the main goal of a survey. New advances in photogrammetry and the capability to produce dense 3D point clouds do not by themselves solve the survey problem; new opportunities for 3D representation are now available, and we must use them to find new ways to link geometry and knowledge. The platform can efficiently manage and process large 3D data (point sets, meshes) thanks to the implementation of space-partitioning methods from the state of the art, such as octrees and kd-trees, and can thus interact with dense point clouds (thousands to millions of points) in real time. The semantization of raw 3D data relies on geometric algorithms such as geodetic path computation, surface extraction from dense point clouds, and geometric primitive optimization. The platform provides an interface that enables experts to describe geometric representations of objects of interest, such as ashlar blocks, stratigraphic units, or generic items (contours, lines, …), directly on the 3D representation of the site, without explicit links to the underlying algorithms. It offers two ways of describing a geometric representation. If oriented photographs are available, the expert can draw geometry on a photograph and the system computes its 3D representation by projection onto the underlying mesh or point cloud. If photographs are not available, or if the expert prefers to work only with the 3D representation, object shapes can be drawn on it directly. Once 3D representations of the objects of a surveyed site are extracted from the mesh, the link with domain-related documentation is made through a set of forms designed by experts. Information from these forms is linked to the geometry, so that documentation can be attached to the viewed objects. Additional semantization methods for specific domains have been added to the platform. Beyond realistic rendering of the surveyed site, the platform embeds non-photorealistic rendering (NPR) algorithms, which can dynamically illustrate objects of interest related to knowledge with specific styles. The whole platform is implemented in a Java framework and relies on a current, effective 3D engine that makes the latest rendering methods available. We illustrate this work on various photogrammetric surveys, in medieval archaeology with the Shawbak castle in Jordan and in underwater archaeology on different marine sites.
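
    Of the data structures named above, the kd-tree is the one that makes real-time picking in dense clouds practical. A small SciPy sketch, with an invented cloud and cursor position, shows the two queries such a platform needs constantly: k-nearest-neighbour lookup and radius gathering.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical survey point cloud: one million 3D points in a unit cube.
cloud = np.random.rand(1_000_000, 3).astype(np.float32)
tree = cKDTree(cloud)   # built once, queried interactively thereafter

cursor = np.array([0.5, 0.5, 0.5])              # user's 3D pick position
dist, idx = tree.query(cursor, k=10)            # 10 nearest points to the cursor
patch = tree.query_ball_point(cursor, r=0.01)   # all points within radius 0.01
print(len(patch), "points in the picked patch")
```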

  18. A client–server framework for 3D remote visualization of radiotherapy treatment space

    PubMed Central

    Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.

    2013-01-01

    Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes a precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not get addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real-time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable system with multiple graphics processing units (GPUs). The rendered 3D images were then encoded using GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments. PMID:23440605
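
    The reported frame rates can be sanity-checked with back-of-envelope arithmetic. The Python sketch below compares raw and encoded bandwidth for the stated 1280 × 960 stereo stream; the roughly 75x H.264 compression ratio is an assumption for illustration, not a figure from the paper.

```python
# Raw vs. encoded bandwidth for a 1280 x 960, 24-bit, two-eye stereo stream.
width, height, bytes_per_px, eyes = 1280, 960, 3, 2
raw_frame_bits = width * height * bytes_per_px * eyes * 8

for fps in (8, 40, 81):
    raw_mbps = raw_frame_bits * fps / 1e6
    # Assumed ~75x H.264 compression on rendered imagery (illustrative only).
    print(f"{fps:>2} fps: raw {raw_mbps:8.1f} Mb/s, ~{raw_mbps / 75:6.1f} Mb/s encoded")
```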

  19. Video-Game-Like Engine for Depicting Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Upchurch, Paul R.

    2009-01-01

    GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OS X, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.

  20. Investigating Integration Capabilities Between Ifc and Citygml LOD3 for 3d City Modelling

    NASA Astrophysics Data System (ADS)

    Floros, G.; Pispidikis, I.; Dimopoulou, E.

    2017-10-01

    Smart-city concepts are being applied to an increasing number of fields. This evolution drives data collection and integration, and major issues arise that need to be tackled. One of the most important challenges is the heterogeneity of the collected data, especially when those data derive from different standards and vary in terms of geometry, topology, and semantics. Another key challenge is the efficient analysis and visualization of spatial data, which, given the complexity of physical reality in the modern world, 2D GIS struggles to handle. To facilitate data analysis and enhance the role of smart cities, the third dimension therefore needs to be incorporated. Standards such as CityGML and IFC fulfill that necessity, but they present major differences in their schemas that render their integration a challenging task. This paper focuses on addressing those differences, reviews the research work to date, and investigates an alternative methodology to bridge the gap between these standards. Within this framework, a generic IFC model is generated and converted to a CityGML model, which is validated and evaluated for its geometric correctness and semantic coherence. General results as well as future research considerations are presented.

  1. Sequence alignment visualization in HTML5 without Java.

    PubMed

    Gille, Christoph; Birgit, Weyand; Gille, Andreas

    2014-01-01

    Java has been used extensively for the visualization of biological data on the web. However, the Java runtime environment is an additional layer of software with its own set of technical problems and security risks. HTML in its new version 5 provides features that, for some tasks, may render Java unnecessary. Alignment-To-HTML is the first HTML-based interactive visualization for annotated multiple sequence alignments. The server-side script interpreter can perform all tasks, such as (i) sequence retrieval, (ii) alignment computation, (iii) rendering, (iv) identification of homologous structural models and (v) communication with BioDAS servers. The rendered alignment can be included in web pages and is displayed in all browsers on all platforms, including touch-screen tablets. The functionality of the user interface is similar to legacy Java applets and includes color schemes, highlighting of conserved and variable alignment positions, row reordering by drag and drop, interlinked 3D visualization, and sequence groups. Novel features are (i) support for multiple overlapping residue annotations, such as chemical modifications, single nucleotide polymorphisms and mutations, (ii) mechanisms to quickly hide residue annotations, (iii) export to MS Word and (iv) sequence icons. Alignment-To-HTML, the first interactive alignment visualization that runs in web browsers without additional software, confirms that to some extent HTML5 is already sufficient to display complex biological data. The low speed at which programs are executed in browsers is still the main obstacle. Nevertheless, we envision increased use of HTML and JavaScript for interactive biological software. Under GPL at: http://www.bioinformatics.org/strap/toHTML/.
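
    The idea of replacing an applet with plain markup is easy to demonstrate. The Python sketch below, which is not the Alignment-To-HTML implementation, emits a colored multiple sequence alignment as self-contained HTML; the sequences and the color scheme are invented.

```python
# Render a toy multiple sequence alignment as plain HTML spans with inline
# styles; any browser can display it, with no runtime plugin required.
ALN = {"seqA": "MKT-AYIAKQR", "seqB": "MKTQAYIA-QR", "seqC": "MKT-AYLAKQR"}
COLORS = {"K": "#fbb", "R": "#fbb", "D": "#bbf", "E": "#bbf"}  # crude residue scheme

rows = []
for name, seq in ALN.items():
    cells = "".join(
        f'<span style="background:{COLORS.get(c, "#fff")}">{c}</span>' for c in seq
    )
    rows.append(f"<div><b>{name}</b> <code>{cells}</code></div>")

with open("alignment.html", "w") as fh:
    fh.write("<html><body>" + "\n".join(rows) + "</body></html>")
```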

  2. 3D Microstructural Architectures for Metal and Alloy Components Fabricated by 3D Printing/Additive Manufacturing Technologies

    NASA Astrophysics Data System (ADS)

    Martinez, E.; Murr, L. E.; Amato, K. N.; Hernandez, J.; Shindo, P. W.; Gaytan, S. M.; Ramirez, D. A.; Medina, F.; Wicker, R. B.

    The layer-by-layer building of monolithic, 3D metal components from selectively melted powder layers using laser or electron beams is a novel form of 3D printing or additive manufacturing. Microstructures created in these 3D products can involve novel, directional solidification structures which can include crystallographically oriented grains containing columnar arrays of precipitates characteristic of a microstructural architecture. These microstructural architectures are advantageously rendered in 3D image constructions involving light optical microscopy and scanning and transmission electron microscopy observations. Microstructural evolution can also be effectively examined through 3D image sequences which, along with x-ray diffraction (XRD) analysis in the x-y and x-z planes, can effectively characterize related crystallographic/texture variances. This paper compares 3D microstructural architectures in Co-base and Ni-base superalloys, columnar martensitic grain structures in 17-4 PH alloy, and columnar copper oxides and dislocation arrays in copper.

  3. Virtual probing system for medical volume data

    NASA Astrophysics Data System (ADS)

    Xiao, Yongfei; Fu, Yili; Wang, Shuguo

    2007-12-01

    Because 3D medical data visualization involves huge amounts of computation, interactive exploration of the interior of a dataset has long been a problem to be resolved. In this paper, we present a novel approach for exploring 3D medical datasets in real time by utilizing a 3D widget to manipulate the scanning plane. With the help of the 3D texture capability of modern graphics cards, a virtual scanning probe is used to explore oblique clipping planes of medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. It should be a valuable tool for anatomy education and for understanding medical images in medical research.
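
    The probe's core operation, resampling the volume on an arbitrary oblique plane, can be prototyped on the CPU in a few lines (the GPU version performs the same interpolation through 3D textures). In this NumPy/SciPy sketch the volume, plane origin, and in-plane axes are all invented.

```python
import numpy as np
from scipy.ndimage import map_coordinates

vol = np.random.rand(128, 128, 128).astype(np.float32)  # toy volume (z, y, x)

origin = np.array([64.0, 64.0, 64.0])   # point on the scanning plane
u = np.array([0.0, 0.8, 0.6])           # in-plane axis 1 (unit length)
v = np.array([1.0, 0.0, 0.0])           # in-plane axis 2, orthogonal to u

# Parameterize a 100x100 patch of the plane and resample the volume on it.
s, t = np.mgrid[-50:50, -50:50]
coords = origin[:, None, None] + s * u[:, None, None] + t * v[:, None, None]
slice_img = map_coordinates(vol, coords, order=1)   # trilinear interpolation
```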

  4. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement

    PubMed Central

    Garcia-Cantero, Juan J.; Brito, Juan P.; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes. PMID:28690511

  5. Preventing facial recognition when rendering MR images of the head in three dimensions.

    PubMed

    Budin, François; Zeng, Donglin; Ghosh, Arpita; Bullitt, Elizabeth

    2008-06-01

    In the United States, patient-specific information may not be made public without the patient's consent. This ruling has led to difficulty for those interested in sharing three-dimensional (3D) images of the head and brain since a patient's face might be recognized from a 3D rendering of the skin surface. Approaches employed to date have included brain stripping and total removal of the face anterior to a cut plane, each of which lose potentially important anatomical information about the skull surface, air sinuses, and orbits. This paper describes a new approach that involves (a) definition of a plane anterior to which the face lies, and (b) an adjustable level of deformation of the skin surface anterior to that plane. On the basis of a user performance study using forced choices, we conclude that approximately 30% of individuals are at risk of recognition from 3D renderings of unaltered images and that truncation of the face below the level of the nose does not preclude facial recognition. Removal of the face anterior to a cut plane may interfere with accurate registration and may delete important anatomical information. Our new method alters little of the underlying anatomy and does not prevent effective registration into a common coordinate system. Although the methods presented here were not fully effective (one subject was consistently recognized under the forced choice study design even at the maximum deformation level employed) this paper may point a way toward solution of a difficult problem that has received little attention in the literature.
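
    The two ingredients named in the abstract, a separating plane and an adjustable anterior deformation, can be sketched directly. In the Python fragment below the vertices and the compression model are illustrative assumptions; the paper's actual deformation is not reproduced.

```python
import numpy as np

def deface(verts, plane_point, plane_normal, level=0.5):
    """Deform only the skin geometry anterior to the given plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed = (verts - plane_point) @ n        # > 0 means anterior to the plane
    out = verts.copy()
    mask = signed > 0
    # Compress anterior vertices toward the plane: level=0 leaves the face
    # intact, level=1 flattens it onto the plane (a stand-in deformation).
    out[mask] -= level * signed[mask, None] * n
    return out

verts = np.random.rand(1000, 3) * 100.0        # hypothetical skin vertices (mm)
defaced = deface(verts, plane_point=np.array([0.0, 60.0, 0.0]),
                 plane_normal=np.array([0.0, 1.0, 0.0]), level=0.7)
```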

  6. Preventing Facial Recognition When Rendering MR Images of the Head in Three Dimensions

    PubMed Central

    Budin, François; Zeng, Donglin; Ghosh, Arpita; Bullitt, Elizabeth

    2008-01-01

    In the United States, patient-specific information may not be made public without the patient's consent. This ruling has led to difficulty for those interested in sharing three-dimensional (3D) images of the head and brain since a patient's face might be recognized from a 3D rendering of the skin surface. Approaches employed to date have included brain stripping and total removal of the face anterior to a cut plane, each of which lose potentially important anatomical information about the skull surface, air sinuses, and orbits. This paper describes a new approach that involves (a) definition of a plane anterior to which the face lies, and (b) an adjustable level of deformation of the skin surface anterior to that plane. On the basis of a user performance study using forced choices, we conclude that approximately 30% of individuals are at risk of recognition from 3D renderings of unaltered images and that truncation of the face below the level of the nose does not preclude facial recognition. Removal of the face anterior to a cut plane may interfere with accurate registration and may delete important anatomical information. Our new method alters little of the underlying anatomy and does not prevent effective registration into a common coordinate system. Although the methods presented here were not fully effective (one subject was consistently recognized under the forced choice study design even at the maximum deformation level employed) this paper may point a way toward solution of a difficult problem that has received little attention in the literature. PMID:18069044

  7. Real-time synthetic vision cockpit display for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Smith, W. Garth; Rybacki, Richard M.

    1999-07-01

    Low-cost, high-performance graphics solutions based on PC hardware platforms are now capable of rendering synthetic vision of a pilot's out-the-window view during all phases of flight. When coupled to a GPS navigation payload, the virtual image can be fully correlated with the physical world. In particular, differential GPS services such as the Wide Area Augmentation System (WAAS) will provide all aviation users with highly accurate 3D navigation. As well, short-baseline GPS attitude systems are becoming a viable and inexpensive solution. A glass cockpit display rendering terrain draped with geographically specific imagery in real time can be coupled with high-accuracy (7 m 95% positioning, sub-degree pointing), high-integrity (99.99999% position error bound) differential GPS navigation/attitude solutions to provide both situational awareness and 3D guidance to (auto)pilots throughout the en route, terminal area, and precision approach phases of flight. This paper describes the technical issues addressed when coupling GPS and glass cockpit displays, including the navigation/display interface, real-time 60 Hz rendering of terrain with multiple levels of detail under demand paging, and construction of verified terrain databases draped with geographically specific satellite imagery. Further, on-board recordings of the navigation solution and the cockpit display provide a replay facility for post-flight simulation based on live landings, as well as synchronized multiple display channels with different views from the same flight. PC-based solutions which integrate GPS navigation and attitude determination with 3D visualization provide the aviation community, and general aviation in particular, with low-cost, high-performance guidance and situational awareness in all phases of flight.

  8. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.

    PubMed

    Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz

    2015-01-01

    This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of the insertion of a virtual bendable needle. To this end, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
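
    The central mechanism, warping a reference CT by a time-variant displacement field and reusing the same field for haptics, can be prototyped concisely. In the sketch below the volume, grid, and sinusoidal 'breathing' are toy placeholders rather than the paper's learned motion models.

```python
import numpy as np
from scipy.ndimage import map_coordinates

ref_ct = np.random.rand(64, 128, 128).astype(np.float32)     # reference CT stand-in
grid = np.mgrid[0:64, 0:128, 0:128].astype(np.float32)       # (3, z, y, x) sample grid

def displacement(t):
    """Toy sinusoidal 'breathing': 3-voxel cranio-caudal shift, 4 s cycle."""
    u = np.zeros_like(grid)
    u[0] = 3.0 * np.sin(2.0 * np.pi * t / 4.0)
    return u

def warped_volume(t):
    # Sample the reference CT at displaced positions (backward warping).
    return map_coordinates(ref_ct, grid + displacement(t), order=1)

vol_t = warped_volume(t=1.3)
# The haptic loop applies the inverse idea: subtract u(x, t) from the device
# position to evaluate forces in reference-CT space at kHz update rates.
```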

  9. RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research.

    PubMed

    Heasly, Benjamin S; Cottaris, Nicolas P; Lichtman, Daniel P; Xiao, Bei; Brainard, David H

    2014-02-07

    RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.

  10. Participation of 3-O-sulfated heparan sulfates in the protection of macrophages by herpes simplex virus-1 glycoprotein D and cyclophilin B against apoptosis.

    PubMed

    Delos, Maxime; Hellec, Charles; Foulquier, François; Carpentier, Mathieu; Allain, Fabrice; Denys, Agnès

    2017-02-01

    Heparan sulfates (HS) are involved in numerous biological processes, which rely on their ability to interact with a large panel of proteins. Although the reaction of 3-O-sulfation can be catalysed by the largest family of HS sulfotransferases, very few mechanisms have been associated with this modification and to date, only glycoprotein D (gD) of herpes simplex virus-1 (HSV-1 gD) and cyclophilin B (CyPB) have been well-described as ligands for 3-O-sulfated HS. Here, we hypothesized that both ligands could induce the same responses via a mechanism dependent on 3-O-sulfated HS. First, we checked that HSV-1 gD was as efficient as CyPB to induce the activation of the same signalling events in primary macrophages. We then demonstrated that both ligands efficiently reduced staurosporin-induced apoptosis and modulated the expression of apoptotic genes. In addition to 3-O-sulfated HS, HSV-1 gD was reported to interact with other receptors, including herpes virus entry mediator (HVEM), nectin-1 and -2. Thus, we decided to identify the contribution of each binding site in the responses triggered by HSV-1 gD and CyPB. We found that knock-down of 3-O-sulfotransferase 2, which is the main 3-O-sulfated HS-generating enzyme in macrophages, strongly reduced the responses induced by both ligands. Moreover, silencing the expression of HVEM rendered macrophages unresponsive to either HSV-1 gD or CyPB, thus indicating that both proteins induced the same responses by interacting with a complex formed by 3-O-sulfated HS and HVEM. Collectively, our results suggest that HSV-1 might hijack the binding sites for CyPB in order to protect macrophages against apoptosis for efficient infection.

  11. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high-performance rendering of complex three-dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  12. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
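
    A core loop of any articulated display is turning telemetered joint angles into link poses via forward kinematics. The Python sketch below does this for a toy two-link planar arm; real platforms chain 4x4 homogeneous transforms per joint from a vehicle model file, and all names and numbers here are illustrative.

```python
import numpy as np

def fk_planar(lengths, angles):
    """Forward kinematics of a planar serial chain: joint angles -> link endpoints."""
    pts = [np.zeros(2)]
    theta = 0.0
    for length, a in zip(lengths, angles):
        theta += a   # each joint angle is relative to the previous link
        pts.append(pts[-1] + length * np.array([np.cos(theta), np.sin(theta)]))
    return np.array(pts)

# Telemetry update: redraw the arm with the latest reported joint angles.
endpoints = fk_planar([0.5, 0.3], np.radians([30.0, 45.0]))
print(endpoints)
```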

  13. Roles of universal three-dimensional image analysis devices that assist surgical operations.

    PubMed

    Sakamoto, Tsuyoshi

    2014-04-01

    The circumstances surrounding medical image analysis have evolved rapidly, to the point where the "imaging" obtained from medical imaging modalities and the "analysis" we apply to it have become amalgamated. For the imaging analysis of any organ system, the distance between "imaging" and "analysis" has closed, as if the two had become integrated. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR) for helical CT scanning had a significant impact and became the basis for recent image analysis. Subsequently, curved planar reconstruction (CPR) and other methods were developed, and 3D diagnostic imaging and image analysis of the human body began on a full scale. Volume rendering: the development of a new rendering algorithm, together with significant improvements in memory and CPUs, led to "volume rendering," which produces 3D views while retaining internal information. This development created new value: computed tomography (CT) images that had previously served only "diagnosis" became "applicable to treatment." Before volume rendering, a clinician had to mentally reconstruct an image acquired for diagnosis into a 3D image; these developments allowed the 3D image to be depicted directly on a monitor. Current technology: currently, in Japan, estimation of the liver volume and of the perfusion areas of the portal and hepatic veins is being vigorously adopted during preoperative planning for hepatectomy. This circumstance appears to have been brought about by substantial improvement of the underlying techniques and by upgraded user interfaces that allow doctors to perform the manipulations easily themselves. The specific techniques are described in the article. Future of post-processing technology: in terms of the role of image analysis, it is expected, for better or worse, that computer-aided diagnosis (CAD) will develop to a highly advanced level in every diagnostic field. In the treatment field, techniques coordinating various devices will be strongly required for surgical navigation. Surgery using image navigation is already being widely studied, and coordination with hardware, including robots, will also be developed. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
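
    The property that distinguishes volume rendering from surface rendering is that every sample along a viewing ray can contribute, so internal structure remains visible. A minimal sketch, not from the article: the synthetic volume and the linear opacity/color transfer functions below are toy assumptions standing in for a real CT volume and a user-designed transfer function.

    ```python
    # Minimal front-to-back volume ray casting over a synthetic CT-like volume.
    import numpy as np

    N = 64
    z, y, x = np.mgrid[0:N, 0:N, 0:N]
    # Synthetic volume: a dense "bone" sphere inside a faint "tissue" sphere.
    r = np.sqrt((x - N/2)**2 + (y - N/2)**2 + (z - N/2)**2)
    volume = np.where(r < 10, 1.0, np.where(r < 25, 0.2, 0.0))

    def ray_cast(vol, step=1.0):
        """March rays along +z; accumulate color and opacity front to back."""
        h, w = vol.shape[1], vol.shape[2]
        color = np.zeros((h, w))
        alpha = np.zeros((h, w))
        for k in range(vol.shape[0]):
            sample = vol[k]                         # density at this depth
            a = np.clip(sample * step * 0.1, 0, 1)  # toy opacity transfer
            c = sample                              # toy grayscale color transfer
            color += (1 - alpha) * a * c            # front-to-back compositing
            alpha += (1 - alpha) * a
            if np.all(alpha > 0.99):                # early ray termination
                break
        return color

    image = ray_cast(volume)
    print("rendered image range:", image.min(), image.max())
    ```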

  14. Acoustic Holographic Rendering with Two-dimensional Metamaterial-based Passive Phased Array

    PubMed Central

    Xie, Yangbo; Shen, Chen; Wang, Wenqi; Li, Junfei; Suo, Dingjie; Popa, Bogdan-Ioan; Jing, Yun; Cummer, Steven A.

    2016-01-01

    Acoustic holographic rendering, in complete analogy with optical holography, is useful for various applications, ranging from multi-focal lensing and multiplexed sensing to the synthesis of three-dimensional complex sound fields. Conventional approaches rely on a large number of active transducers and phase-shifting circuits. In this paper we show that by using passive metamaterials as subwavelength pixels, holographic rendering can be achieved without cumbersome circuitry and with only a single transducer, thus significantly reducing system complexity. Such metamaterial-based holograms can serve as versatile platforms for advanced acoustic wave manipulation and signal modulation, leading to new possibilities in acoustic sensing, energy deposition and medical diagnostic imaging. PMID:27739472

  15. ConfocalVR: Immersive Visualization Applied to Confocal Microscopy.

    PubMed

    Stefani, Caroline; Lacy-Hulbert, Adam; Skillman, Thomas

    2018-06-24

    ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of 2D images throughout the specimen. Current software applications reconstruct the 3D image and render it as a 2D projection onto a computer screen where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade virtual reality (VR) systems to fully immerse the user in the 3D cellular image. In this virtual environment the user can: 1) adjust image viewing parameters without leaving the virtual space, 2) reach out and grab the image to quickly rotate and scale the image to focus on key features, and 3) interact with other users in a shared virtual space enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for yourself. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits. Copyright © 2018. Published by Elsevier Ltd.

  16. A 3-Dimensional Atlas of Human Tongue Muscles

    PubMed Central

    SANDERS, IRA; MU, LIANCAI

    2013-01-01

    The human tongue is one of the most important yet least understood structures of the body. One reason for the relative lack of research on the human tongue is its complex anatomy, which is a real barrier to investigators because few anatomical resources in the literature show this complex anatomy clearly. As a result, the diagnosis and treatment of tongue disorders lag behind those for other structures of the head and neck. This report is intended to fill this gap by displaying the tongue's anatomy in multiple ways. The primary material used in this study comprised serial axial images of the male and female human tongue from the Visible Human (VH) Project of the National Library of Medicine. In addition, thick serial coronal sections of three human tongues were rendered translucent. The VH axial images were computer-reconstructed into serial coronal sections, and each tongue muscle was outlined. These outlines were used to construct a 3-dimensional computer model of the tongue that allows each muscle to be seen in its in vivo anatomical position. The thick coronal sections supplement the 3-D model by showing details of the complex interweaving of tongue muscles throughout the tongue. The graphics are perhaps the clearest guide to date to aid clinical or basic science investigators in identifying each tongue muscle in any part of the human tongue. PMID:23650264

  17. ASCEM Data Browser (ASCEMDB) v0.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ROMOSAN, ALEXANDRU

    Data management tool designed for the Advanced Simulation Capability for Environmental Management (ASCEM) framework. Distinguishing features of this gateway include: (1) handling of complex geometry data, (2) an advanced selection mechanism, (3) state-of-the-art rendering of spatiotemporal data records, and (4) seamless integration with a distributed workflow engine.

  18. Remote volume rendering pipeline for mHealth applications

    NASA Astrophysics Data System (ADS)

    Gutenko, Ievgeniia; Petkov, Kaloian; Papadopoulos, Charilaos; Zhao, Xin; Park, Ji Hwan; Kaufman, Arie; Cha, Ronald

    2014-03-01

    We introduce a novel remote volume rendering pipeline for medical visualization targeted for mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners with respect to the complexity of the volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.
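
    The thin-client division of labor can be sketched as follows: the server owns the volume, renders a frame per interaction event, and compresses it before streaming, while the client only decodes and displays. In this illustrative sketch, zlib-compressed raw frames stand in for the paper's hardware-accelerated H.264 stream, and the class and method names are assumptions.

    ```python
    # Thin-client sketch: the server holds the full volume, renders a frame per
    # interaction event, compresses it, and ships bytes to the client. zlib is a
    # stand-in for the hardware H.264 encoder described in the paper.
    import zlib
    import numpy as np

    class RenderServer:
        def __init__(self, volume):
            self.volume = volume                 # full dataset stays server-side

        def render(self, view):
            """Toy renderer: max-intensity projection along the chosen axis."""
            return self.volume.max(axis=view["axis"])

        def handle_interaction(self, view):
            frame = self.render(view)
            raw = (255 * frame / frame.max()).astype(np.uint8).tobytes()
            return zlib.compress(raw)            # encode before streaming

    class ThinClient:
        def __init__(self, server, shape):
            self.server, self.shape = server, shape

        def interact(self, axis):
            payload = self.server.handle_interaction({"axis": axis})
            frame = np.frombuffer(zlib.decompress(payload), np.uint8)
            return frame.reshape(self.shape)     # only decoded pixels client-side

    volume = np.random.default_rng(1).random((128, 128, 128))
    client = ThinClient(RenderServer(volume), (128, 128))
    print("decoded frame:", client.interact(axis=0).shape)
    ```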

  19. Generalized pipeline for preview and rendering of synthetic holograms

    NASA Astrophysics Data System (ADS)

    Pappu, Ravikanth; Sparrell, Carlton J.; Underkoffler, John S.; Kropp, Adam B.; Chen, Benjie; Plesniak, Wendy J.

    1997-04-01

    We describe a general pipeline for the computation and display of either fully-computed holograms or holographic stereograms using the same 3D database. A rendering previewer on a Silicon Graphics Onyx allows a user to specify viewing geometry, database transformations, and scene lighting. The previewer then generates one of two descriptions of the object--a series of perspective views or a polygonal model--which is then used by a fringe rendering engine to compute fringes specific to hologram type. The images are viewed on the second generation MIT Holographic Video System. This allows a viewer to compare holographic stereograms with fully-computed holograms originating from the same database and comes closer to the goal of a single pipeline being able to display the same data in different formats.

  20. Heavy metal staining, a comparative assessment of gadolinium chloride and osmium tetroxide for inner ear labyrinthine contrast enhancement using X-ray microtomography.

    PubMed

    Wong, Christopher C; Curthoys, Ian S; O'Leary, Stephen J; Jones, Allan S

    2013-01-01

    The use of both gadolinium chloride (GdCl(3)) and osmium tetroxide (OsO(4)) allowed for the visualization of the membranous labyrinth and other intralabyrinthine structures, at different intensities, as compared with the control sample. This initial comparison shows the advantages of GdCl(3) in radiological assessments and OsO(4) in more detailed anatomical studies and pathways of labyrinthine pathogenesis using X-ray microtomography (microCT). To assess an improved OsO(4) staining protocol and compare the staining affinities against GdCl(3). Guinea pig temporal bones were stained with either GdCl(3) (2% w/v) for 7 days or OsO(4) (2% w/v) for 3 days, and scanned in a microCT system. The post-scanned datasets were then assessed in a 3D rendering program. The enhanced soft tissue contrast as presented in the temporal bones stained with either GdCl(3) or OsO(4) allowed for the membranous labyrinth to be visualized throughout the whole specimen. GdCl(3)-stained specimens presented more defined contours of the bone profile in the radiographs, while OsO(4)-stained specimens provided more anatomical detail of individual intralabyrinthine structures, hence allowing spatial relationships to be visualized with ease in a 3D rendering context and 2D axial slice images.

  1. A graphics to scalable vector graphics adaptation framework for progressive remote line rendering on mobile devices

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Kim, Hae-Kwang

    2007-12-01

    In this paper, we introduce a graphics to Scalable Vector Graphics (SVG) adaptation framework with a mechanism for vector graphics transmission, designed to overcome the shortcomings in real-time representation and interaction of 3D graphics applications running on mobile devices. On top of the proposed framework, we develop an interactive 3D visualization system for rapidly presenting a 3D scene on mobile devices without the full 3D content having to be downloaded from the server. The system comprises a client viewer and a graphics to SVG adaptation server. The client viewer gives the user access to the same 3D content on different devices in response to user interactions.

  2. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High quality volume visualization through ray casting on graphics processing units (GPUs) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy, which are generated by downsampling eight neighboring nodes on a finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function, and distinct areas of interest. During runtime the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory thereby acts as a second-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory. This texture atlas contains the complete working set of bricks of the current multi-resolution representation of the volume, which enables the ray casting algorithm to access the whole working set through a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture that represents the current octree subdivision on its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set in a single rendering pass with no loss of compositing precision. In contrast, most state-of-the-art volume rendering systems handle the bricks as individual 3D textures, which are rendered one at a time while the results are composited into a lower-precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques like empty-space skipping, adaptive sampling, and preintegrated transfer functions in a very straightforward manner with virtually no extra cost. Our interactive implementation allows high quality visualization of massive volume data sets of tens of gigabytes in size on standard desktop workstations.
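
    The working-set logic described above can be sketched as a recursive octree traversal that refines bricks near the viewer and keeps distant ones coarse. The refinement threshold below is an illustrative policy, not the authors' criterion, and the scene coordinates are toy assumptions.

    ```python
    # Sketch of view-dependent working-set selection for a bricked
    # multi-resolution volume: refine octree nodes near the viewer,
    # keep coarse nodes far away.
    import numpy as np

    def select_bricks(center, size, level, viewer, max_level, out):
        """Recursively pick bricks: subdivide while a node is close and coarse."""
        dist = np.linalg.norm(np.asarray(center) - np.asarray(viewer))
        if level == max_level or dist > 2.0 * size:
            out.append((tuple(center), size, level))  # use this resolution here
            return
        half = size / 2
        for dx in (-half/2, half/2):
            for dy in (-half/2, half/2):
                for dz in (-half/2, half/2):
                    child = (center[0]+dx, center[1]+dy, center[2]+dz)
                    select_bricks(child, half, level + 1, viewer, max_level, out)

    working_set = []
    select_bricks((0.5, 0.5, 0.5), 1.0, 0, viewer=(0.1, 0.1, 0.1),
                  max_level=3, out=working_set)
    levels = [l for _, _, l in working_set]
    print(len(working_set), "bricks; levels used:", sorted(set(levels)))
    ```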

  3. Detection of pesticides and dioxins in tissue fats and rendering oils using laser-induced breakdown spectroscopy (LIBS).

    PubMed

    Multari, Rosalie A; Cremers, David A; Scott, Thomas; Kendrick, Peter

    2013-03-13

    In laser-induced breakdown spectroscopy (LIBS), a series of powerful laser pulses are directed at a surface to form microplasmas from which light is collected and spectrally analyzed to identify the surface material. In most cases, no sample preparation is needed, and results can be automated and made available within seconds to minutes. Advances in LIBS spectral data analysis using multivariate regression techniques have led to the ability to detect organic chemicals in complex matrices such as foods. Here, the use of LIBS to differentiate samples contaminated with aldrin, 1,2,3,4,6,7,8-heptachlorodibenzo-p-dioxin, chlorpyrifos, and dieldrin in the complex matrices of tissue fats and rendering oils is described. The pesticide concentrations in the samples ranged from 0.005 to 0.1 μg/g. All samples were successfully differentiated from each other and from control samples. Sample concentrations could also be differentiated for all of the pesticides and the dioxin included in this study. The results presented here provide first proof-of-principle data for the ability to create LIBS-based instrumentation for the rapid analysis of pesticide and dioxin contamination in tissue fat and rendered oils.

  4. Predicting Student Performance in Sonographic Scanning Using Spatial Ability as an Ability Determinant of Skill Acquisition

    ERIC Educational Resources Information Center

    Clem, Douglas Wayne

    2012-01-01

    Spatial ability refers to an individual's capacity to visualize and mentally manipulate three dimensional objects. Since sonographers manually manipulate 2D and 3D sonographic images to generate multi-viewed, logical, sequential renderings of an anatomical structure, it can be assumed that spatial ability is central to the perception and…

  5. Real-time stereographic display of volumetric datasets in radiology

    NASA Astrophysics Data System (ADS)

    Wang, Xiao Hui; Maitz, Glenn S.; Leader, J. K.; Good, Walter F.

    2006-02-01

    A workstation for testing the efficacy of stereographic displays for applications in radiology has been developed, and is currently being tested on lung CT exams acquired for lung cancer screening. The system exploits pre-staged rendering to achieve real-time dynamic display of slabs, where slab thickness, axial position, rendering method, brightness and contrast are interactively controlled by viewers. Stereo presentation is achieved by use of either frame-swapping images or cross-polarizing images. The system enables viewers to toggle between alternative renderings such as one using distance-weighted ray casting by maximum-intensity-projection, which is optimal for detection of small features in many cases, and ray casting by distance-weighted averaging, for characterizing features once detected. A reporting mechanism is provided which allows viewers to use a stereo cursor to measure and mark the 3D locations of specific features of interest, after which a pop-up dialog box appears for entering findings. The system's impact on performance is being tested on chest CT exams for lung cancer screening. Radiologists' subjective assessments have been solicited for other kinds of 3D exams (e.g., breast MRI) and their responses have been positive. Objective estimates of changes in performance and efficiency, however, must await the conclusion of our study.

  6. Papua New Guinea MT: Looking where seismic is blind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoversten, G.M.

    1996-11-01

    Hydrocarbon exploration in the Papuan fold belt is made extremely difficult by mountainous terrain, equatorial jungle and thick karstified Miocene limestones at the surface. The high-velocity karstified limestones at or near the surface often render the seismic technique useless for imaging the subsurface. In such areas magnetotellurics (MT) provides a valuable capability for mapping subsurface structure. Numerical and field data examples are presented which demonstrate the severity of the 1D errors and the improvements in accuracy which can be achieved using a 2D inverse solution. Two MT lines over adjacent anticlines, both with well control and seismic data, are used to demonstrate the application of 1D and 2D inversions for structural models. The example over the Hides anticline illustrates a situation where 1D inversion of either TE or TM mode provides essentially the same depth to base of Darai as 2D inversion of both TE and TM. The example over the Angore anticline illustrates the inadequacy of 1D inversion in structurally complex geology complicated by electrical statics. Four MT lines along the Angore anticline have been interpreted using 2D inversion. Three-dimensional modelling has been used to simulate 3D statics in an otherwise 2D earth. These data were used to test the Groom-Bailey (GB) decomposition for possible benefits in reducing static effects and estimating geoelectric strike in the Papua New Guinea (PNG) field data. It has been found that the GB decomposition can provide improved regional 2D strike estimates in 3D contaminated data. However, in situations such as PNG, where the regional 2D strike is well established and hence can be fixed, the GB decomposition provides apparent resistivities identical to those simply rotated to strike.

  7. Three-dimensional volume-rendering technique in the angiographic follow-up of intracranial aneurysms embolized with coils.

    PubMed

    Zhou, Bing; Li, Ming-Hua; Wang, Wu; Xu, Hao-Wen; Cheng, Yong-De; Wang, Jue

    2010-03-01

    The authors conducted a study to evaluate the advantages of a 3D volume-rendering technique (VRT) in follow-up digital subtraction (DS) angiography of coil-embolized intracranial aneurysms. One hundred nine patients with 121 intracranial aneurysms underwent endovascular coil embolization and at least 1 follow-up DS angiography session at the authors' institution. Two neuroradiologists independently evaluated the conventional 2D DS angiograms, rotational angiograms, and 3D VRT images obtained at the interventional procedures and at DS angiography follow-up. If multiple follow-up sessions were performed, the final follow-up was mainly considered. The authors compared the 3 techniques for their ability to detect aneurysm remnants (including aneurysm neck and sac remnants) and parent artery stenosis based on the angiographic follow-up. The Kruskal-Wallis test was used for group comparisons, and the kappa test was used to measure interobserver agreement. Statistical analyses were performed using commercially available software. There was a high statistical significance among 2D DS angiography, rotational angiography, and 3D VRT results (χ² = 9.9613, p = 0.0069) when detecting an aneurysm remnant. Further comparisons disclosed a statistical significance between 3D VRT and rotational angiography (χ² = 4.9754, p = 0.0257); a high statistical significance between 3D VRT and 2D DS angiography (χ² = 8.9169, p = 0.0028); and no significant difference between rotational angiography and 2D DS angiography (χ² = 0.5648, p = 0.4523). There was no statistical significance among the 3 techniques when detecting parent artery stenosis (χ² = 2.5164, p = 0.2842). One case, in which parent artery stenosis was diagnosed by 2D DS angiography and rotational angiography, was excluded by 3D VRT following observations of multiple views. The kappa test showed good agreement between the 2 observers. The 3D VRT is more sensitive in detecting aneurysm remnants than 2D DS angiography and rotational angiography and is helpful for identifying parent artery stenosis. The authors recommend this technique for the angiographic follow-up of patients with coil-embolized aneurysms.

  8. A Single Swede Midge (Diptera: Cecidomyiidae) Larva Can Render Cauliflower Unmarketable.

    PubMed

    Stratton, Chase A; Hodgdon, Elisabeth A; Zuckerman, Samuel G; Shelton, Anthony M; Chen, Yolanda H

    2018-05-01

    Swede midge, Contarinia nasturtii Kieffer (Diptera: Cecidomyiidae), is an invasive pest causing significant damage on Brassica crops in the Northeastern United States and Eastern Canada. Heading brassicas, like cauliflower, appear to be particularly susceptible. Swede midge is difficult to control because larvae feed concealed inside meristematic tissues of the plant. In order to develop damage and marketability thresholds necessary for integrated pest management, it is important to determine how many larvae render plants unmarketable and whether the timing of infestation affects the severity of damage. We manipulated larval density (0, 1, 3, 5, 10, or 20) per plant and the timing of infestation (30, 55, and 80 d after seeding) on cauliflower in the lab and field to answer the following questions: 1) What is the swede midge damage threshold? 2) How many swede midge larvae can render cauliflower crowns unmarketable? and 3) Does the age of cauliflower at infestation influence the severity of damage and marketability? We found that even a single larva can cause mild twisting and scarring in the crown rendering cauliflower unmarketable 52% of the time, with more larvae causing more severe damage and additional losses, regardless of cauliflower age at infestation.

  9. On-demand rendering of an oblique slice through 3D volumetric data using JPEG2000 client-server framework

    NASA Astrophysics Data System (ADS)

    Joshi, Rajan L.

    2006-03-01

    In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
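
    The precinct-selection step can be sketched directly: partition each slice into fixed-size precincts, compute where the oblique plane passes through each precinct column, and request only the intersected precincts. The volume dimensions, precinct size, and plane parameters below are illustrative assumptions, and a real JPIP request would additionally be expressed per resolution level and code-block.

    ```python
    # Sketch: decide which JPEG2000 precincts are needed to render an oblique
    # slice. Each slice image is partitioned into fixed-size precincts; a
    # precinct is requested only if the oblique plane crosses its 3D extent.
    import numpy as np

    VOL = (256, 256, 256)           # (z slices, rows, cols)
    PRECINCT = 64                   # precinct edge length in pixels

    def precincts_for_plane(normal, point):
        """Return (z, row_block, col_block) precincts the plane touches."""
        n = np.asarray(normal, float)
        p0 = np.asarray(point, float)
        needed = set()
        for r in range(0, VOL[1], PRECINCT):
            for c in range(0, VOL[2], PRECINCT):
                # Solve n . ((z, r, c) - p0) = 0 for z at the 4 corners of the
                # precinct column to find which z-slices the plane crosses.
                corners = [(r, c), (r + PRECINCT, c),
                           (r, c + PRECINCT), (r + PRECINCT, c + PRECINCT)]
                zs = []
                for cr, cc in corners:
                    if abs(n[0]) > 1e-9:
                        zs.append(p0[0] - (n[1]*(cr - p0[1])
                                           + n[2]*(cc - p0[2])) / n[0])
                if not zs:
                    continue
                z_lo = max(0, int(min(zs)))
                z_hi = min(VOL[0] - 1, int(max(zs)))
                for z in range(z_lo, z_hi + 1):
                    needed.add((z, r // PRECINCT, c // PRECINCT))
        return needed

    # A 45-degree oblique plane through the volume center:
    req = precincts_for_plane(normal=(1, 1, 0), point=(128, 128, 128))
    total = VOL[0] * (VOL[1] // PRECINCT) * (VOL[2] // PRECINCT)
    print(f"requesting {len(req)} of {total} precincts")
    ```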

  10. Mechanistic insight into ligand binding to G-quadruplex DNA

    PubMed Central

    Di Leva, Francesco Saverio; Novellino, Ettore; Cavalli, Andrea; Parrinello, Michele; Limongelli, Vittorio

    2014-01-01

    Specific guanine-rich regions in the human genome can form higher-order DNA structures called G-quadruplexes, which regulate many relevant biological processes. For instance, the formation of a G-quadruplex at telomeres can alter cellular functions, inducing apoptosis. Thus, developing small molecules that are able to bind and stabilize the telomeric G-quadruplexes represents an attractive strategy for antitumor therapy. An example is 3-(benzo[d]thiazol-2-yl)-7-hydroxy-8-((4-(2-hydroxyethyl)piperazin-1-yl)methyl)-2H-chromen-2-one (compound 1), recently identified as a potent ligand of the G-quadruplex [d(TGGGGT)]4 with promising in vitro antitumor activity. The experimental observations are suggestive of a complex binding mechanism that, despite efforts, has defied full characterization. Here, we provide through metadynamics simulations a comprehensive understanding of the binding mechanism of 1 to the G-quadruplex [d(TGGGGT)]4. In our calculations, the ligand explores all the available binding sites on the DNA structure, and the free-energy landscape of the whole binding process is computed. We have thus disclosed a peculiar hopping binding mechanism whereby 1 is able to bind both to the groove and to the 3' end of the G-quadruplex. Our results fully explain the available experimental data, rendering our approach of great value for further ligand/DNA studies. PMID:24753420

  11. Three-dimensional Printing in Developing Countries

    PubMed Central

    Ibrahim, Ahmed M. S.; Jose, Rod R.; Rabie, Amr N.; Gerstle, Theodore L.; Lee, Bernard T.

    2015-01-01

    Summary: The advent of 3-dimensional (3D) printing technology has facilitated the creation of customized objects. The lack of regulation in developing countries renders conventional means of addressing various healthcare issues challenging. 3D printing may provide a venue for addressing many of these concerns in an inexpensive and easily accessible fashion. These may potentially include the production of basic medical supplies, vaccination beads, laboratory equipment, and prosthetic limbs. As this technology continues to improve and prices fall, 3D printing has the potential to promote initiatives across the entire developing world, resulting in improved surgical care and a higher quality of healthcare for residents. PMID:26301132

  13. Three-dimensional model of the skull and the cranial bones reconstructed from CT scans designed for rapid prototyping process.

    PubMed

    Skrzat, Janusz; Spulber, Alexandru; Walocha, Jerzy

    This paper presents the results of building mesh models of the human skull and the cranial bones from a series of CT scans. With the aid of computer software, 3D reconstructions of the whole skull and of segmented cranial bones were performed and visualized with surface rendering techniques. The article briefly discusses clinical and educational applications of 3D cranial models created using stereolithographic reproduction.

  14. Voxel-based lesion mapping of meningioma: a comprehensive lesion location mapping of 260 lesions.

    PubMed

    Hirayama, Ryuichi; Kinoshita, Manabu; Arita, Hideyuki; Kagawa, Naoki; Kishima, Haruhiko; Hashimoto, Naoya; Fujimoto, Yasunori; Yoshimine, Toshiki

    2018-06-01

    OBJECTIVE In the present study the authors aimed to determine preferred locations of meningiomas by avoiding descriptive analysis and instead using voxel-based lesion mapping and 3D image-rendering techniques. METHODS Magnetic resonance images obtained in 248 treatment-naïve meningioma patients with 260 lesions were retrospectively and consecutively collected. All images were registered to a 1-mm isotropic, high-resolution, T1-weighted brain atlas provided by the Montreal Neurological Institute (the MNI152), and a lesion frequency map was created, followed by 3D volume rendering to visualize the preferred locations of meningiomas in 3D. RESULTS The 3D lesion frequency map clearly showed that skull base structures such as parasellar, sphenoid wing, and petroclival regions were commonly affected by the tumor. The middle one-third of the superior sagittal sinus was most commonly affected in parasagittal tumors. Substantial lesion accumulation was observed around the leptomeninges covering the central sulcus and the sylvian fissure, with very few lesions observed at the frontal, parietal, and occipital convexities. CONCLUSIONS Using an objective visualization method, meningiomas were shown to be located around the middle third of the superior sagittal sinus, the perisylvian convexity, and the skull base. These observations, which are in line with previous descriptive analyses, justify further use of voxel-based lesion mapping techniques to help understand the biological nature of this disease.

  15. Virtual Sonography Through the Internet: Volume Compression Issues

    PubMed Central

    Vilarchao-Cavia, Joseba; Troyano-Luque, Juan-Mario; Clavijo, Matilde

    2001-01-01

    Background: Three-dimensional ultrasound images allow virtual sonography even at a distance. However, the size of the final 3-D files limits their transmission through slow networks such as the Internet. Objective: To analyze compression techniques that transform ultrasound images into small 3-D volumes that can be transmitted through the Internet without loss of relevant medical information. Methods: Samples were selected from ultrasound examinations performed during 1999-2000 in the Obstetrics and Gynecology Department at the University Hospital in La Laguna, Canary Islands, Spain. The conventional ultrasound video output was recorded at 25 fps (frames per second) on a PC, producing 100- to 120-MB files (for 500 to 550 frames). Processing to obtain 3-D images progressively reduced file size. Results: The original frames passed through different compression stages: selection of the region of interest, rendering techniques, and compression for storage. Final 3-D volumes reached 1:25 compression rates (1.5- to 2-MB files). Those volumes need 7 to 8 minutes to be transmitted through the Internet at a mean data throughput of 6.6 Kbytes per second. At the receiving site, virtual sonography is possible using orthogonal projections or oblique cuts. Conclusions: Modern volume-rendering techniques allowed distant virtual sonography through the Internet, thanks to efficient data compression that preserves the information needed for distant diagnosis. PMID:11720963

  16. Fast Physically Correct Refocusing for Sparse Light Fields Using Block-Based Multi-Rate View Interpolation.

    PubMed

    Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee

    2017-02-01

    Digital refocusing involves a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast physically correct refocusing algorithm that addresses this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. To counter its conventionally high complexity, we devised a fast line-scan method specifically for refocusing, whose 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, and a further 3-34× speedup can be achieved for high-resolution images. All candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results show superior refocusing quality and fast computation speed. In particular, the run time is comparable with that of conventional single-image blurring, which causes serious boundary artifacts.
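
    The physically correct refocusing that the paper accelerates reduces, in its baseline form, to shift-and-add over the sub-aperture views: each view is translated in proportion to its aperture offset and the chosen focal depth, then averaged. The sketch below shows only this baseline principle, with integer shifts and a toy light field, not the block-based multi-rate algorithm.

    ```python
    # Baseline light-field refocusing by shift-and-add.
    import numpy as np

    def refocus(light_field, disparity):
        """light_field: (U, V, H, W) array of sub-aperture views.
        disparity: pixels of shift per unit aperture offset (focal depth)."""
        U, V, H, W = light_field.shape
        cu, cv = (U - 1) / 2, (V - 1) / 2
        acc = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                du = int(round((u - cu) * disparity))
                dv = int(round((v - cv) * disparity))
                acc += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
        return acc / (U * V)

    # Toy 5x5-view light field of random texture; refocus at two depths.
    lf = np.random.default_rng(2).random((5, 5, 64, 64))
    near, far = refocus(lf, disparity=2.0), refocus(lf, disparity=0.0)
    print("refocused images:", near.shape, far.shape)
    ```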

  17. Analyses of the Complexity of Patients Undergoing Attended Polysomnography in the Era of Home Sleep Apnea Tests

    PubMed Central

    Colaco, Brendon; Herold, Daniel; Johnson, Matthew; Roellinger, Daniel; Naessens, James M.; Morgenthaler, Timothy I.

    2018-01-01

    Study Objectives: Health care complexity includes dimensions of patient comorbidity and the level of services needed to meet patient demands. Home sleep apnea tests (HSAT) are increasingly used to test medically uncomplicated patients suspected of having moderate to severe obstructive sleep apnea (OSA). Patients with significant comorbidities or other sleep disorders are not candidates for HSAT and require attended in-center polysomnography. We hypothesized that this trend would result in increasingly complex patients being studied in sleep centers. Methods: Our study had two parts. To ascertain trends in sleep patient comorbidity, we used administrative diagnostic codes from patients undergoing polysomnography at the Mayo Clinic Center for Sleep Medicine from 2005 to June 2015 to calculate the Charlson and the Elixhauser comorbidity indices. We measured the level of services provided in two ways: (1) in a subset of patients from the past 2 months of 2015, we evaluated correlation of these morbidity indices with an internally developed Polysomnogram Clinical Index (PSGCI) rating anticipated patient care needs from 0 to 3 and (2) we measured the sleep study complexity based on polysomnography protocol design. Results: In 43,780 patients studied from 2005 to June 2015, the Charlson index increased from a mean of 1.38 to 1.88 (3.1% per year, P < .001) and the mean Elixhauser index increased from 2.61 to 3.35 (2.5% per year, P < .001). Both comorbidity indices were significantly higher at the highest (Level 3) level of the PSGCI (P < .001), and sleep study complexity increased over time. Conclusions: The complexity of patients undergoing attended polysomnography has increased by 28% to 36% over the past decade as measured by validated comorbidity indices, and these indices correlate with the complexity of rendered care during polysomnography. These findings have implications for increasing requirements for staffing, monitoring capabilities, and facility design of future sleep centers. Commentary: A commentary on this article appears in this issue on page 499. Citation: Colaco B, Herold D, Johnson M, Roellinger D, Naessens JM, Morgenthaler TI. Analyses of the complexity of patients undergoing attended polysomnography in the era of home sleep apnea tests. J Clin Sleep Med. 2018;14(4):631–639. PMID:29609716

  18. Augmented reality in laparoscopic surgical oncology.

    PubMed

    Nicolau, Stéphane; Soler, Luc; Mutter, Didier; Marescaux, Jacques

    2011-09-01

    Minimally invasive surgery represents one of the main evolutions of surgical techniques aimed at providing a greater benefit to the patient. However, it increases the operative difficulty: depth perception is usually dramatically reduced, the field of view is limited, and the sense of touch is transmitted through an instrument. These drawbacks can now be reduced by computer technology that guides the surgical gesture. Indeed, from a patient's medical image (US, CT or MRI), Augmented Reality (AR) can augment the surgeon's intra-operative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. 3D visualization can be performed directly from the medical image, without any pre-processing step, thanks to volume rendering; better results are obtained, however, with surface rendering after organ and pathology delineation and 3D modelling. Registration can be performed interactively or automatically. Several interactive systems have been developed and applied to humans, demonstrating the benefit of AR in surgical oncology; they also show the currently limited interactivity due to soft-organ movement and interaction between surgical instruments and organs. Although current automatic AR systems show the feasibility of such an approach, they still rely on specific and expensive equipment that is not available in clinical routine. Moreover, they are not yet robust enough, owing to the high complexity of developing real-time registration that takes organ deformation and human movement into account. However, the latest results of automatic AR systems are extremely encouraging and show that AR will become a standard requirement for future computer-assisted surgical oncology. In this article, we explain the concept of AR and its principles. We then review the existing interactive and automatic AR systems in digestive surgical oncology, highlighting their benefits and limitations. Finally, we discuss future evolutions and the issues that must still be tackled so that this technology can be seamlessly integrated into the operating room. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. CatSperζ regulates the structural continuity of sperm Ca2+ signaling domains and is required for normal fertility

    PubMed Central

    Chung, Jean-Ju; Miki, Kiyoshi; Kim, Doory; Shim, Sang-Hee; Shi, Huanan F; Hwang, Jae Yeon; Cai, Xinjiang; Iseri, Yusuf; Zhuang, Xiaowei; Clapham, David E

    2017-01-01

    We report that the Gm7068 (CatSpere) and Tex40 (CatSperz) genes encode novel subunits of a 9-subunit CatSper ion channel complex. Targeted disruption of CatSperz reduces CatSper current and sperm rheotactic efficiency in mice, resulting in severe male subfertility. The complex is normally distributed in linear quadrilateral nanodomains along the flagellum; lacking CatSperζ, it is disrupted at ~0.8 μm intervals along the flagellum. This disruption renders the proximal flagellum inflexible and alters the 3D flagellar envelope, thus preventing sperm from reorienting against fluid flow in vitro and efficiently migrating in vivo. Ejaculated CatSperz-null sperm cells retrieved from the mated female uterus partially rescue in vitro fertilization (IVF) that failed with epididymal spermatozoa alone. Human CatSperε is quadrilaterally arranged along the flagella, similar to the CatSper complex in mouse sperm. We speculate that the newly identified CatSperζ subunit is a late evolutionary adaptation to maximize fertilization inside the mammalian female reproductive tract. DOI: http://dx.doi.org/10.7554/eLife.23082.001 PMID:28226241

  20. Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets

    PubMed Central

    Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.

    2011-01-01

    Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227

  1. Correlation induced electron-hole asymmetry in quasi- two-dimensional iridates.

    PubMed

    Pärschke, Ekaterina M; Wohlfeld, Krzysztof; Foyevtsova, Kateryna; van den Brink, Jeroen

    2017-09-25

    The resemblance of the crystallographic and magnetic structures of the quasi-two-dimensional iridates Ba2IrO4 and Sr2IrO4 to La2CuO4 points at an analogy to cuprate high-Tc superconductors, even if spin-orbit coupling is very strong in iridates. Here we examine this analogy for the motion of a charge (hole or electron) added to the antiferromagnetic ground state. We show that correlation effects render the hole and electron case in iridates very different. An added electron forms a spin polaron, similar to the cuprates, but the situation of a removed electron is far more complex. Many-body 5d4 configurations form, which can be singlet and triplet states of total angular momentum that strongly affect the hole motion. This not only has ramifications for the interpretation of (inverse-)photoemission experiments but also demonstrates that correlation physics renders electron- and hole-doped iridates fundamentally different. Some iridate compounds such as Sr2IrO4 have electronic and atomic structures similar to quasi-2D copper oxides, raising the prospect of high-temperature superconductivity. Here, the authors show that there is significant electron-hole asymmetry in iridates, contrary to expectations from the cuprates.

  2. A spectral-Tchebychev solution for three-dimensional dynamics of curved beams under mixed boundary conditions

    NASA Astrophysics Data System (ADS)

    Bediz, Bekir; Aksoy, Serdar

    2018-01-01

    This paper presents the application of the spectral-Tchebychev (ST) technique for the solution of the three-dimensional dynamics of curved beams/structures having variable and arbitrary cross-section under mixed boundary conditions. To accurately capture the vibrational behavior of curved structures, a three-dimensional (3D) solution approach is required, since these structures generally exhibit coupled motions. In this study, the integral boundary value problem (IBVP) governing the dynamics of the curved structures is found using extended Hamilton's principle, where the strain energy is expressed using the 3D linear elasticity equations. To solve the IBVP numerically, the 3D spectral-Tchebychev (3D-ST) approach is used. To evaluate the integral and derivative operations defined by the IBVP, and to render the complex geometry into an equivalent straight beam with rectangular cross-section, a series of coordinate transformations is applied. To validate and assess the performance of the presented solution approach, two case studies are performed: (i) a curved beam with rectangular cross-section, and (ii) a curved and pretwisted beam with airfoil cross-section. In both cases, the results (natural frequencies and mode shapes) are also found using a finite element (FE) solution approach. It is shown that the difference in predicted natural frequencies is less than 1%, and the mode shapes are in excellent agreement based on modal assurance criterion (MAC) analyses; however, the presented spectral-Tchebychev solution approach significantly reduces the computational burden. It can therefore be concluded that the presented solution approach can capture the 3D vibrational behavior of curved beams as accurately as an FE solution, but at a fraction of the computational cost.
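
    At the heart of any spectral-Tchebychev formulation is the representation of derivatives as dense matrices acting on function values at Chebyshev points. The standard construction below (after Trefethen, "Spectral Methods in MATLAB") is shown on a 1D test function rather than the paper's 3D beam equations, and illustrates the spectral accuracy that makes the approach cheap relative to FE.

    ```python
    # Standard Chebyshev differentiation matrix, demonstrated on sin(pi*x).
    import numpy as np

    def cheb(n):
        """Chebyshev points x and differentiation matrix D on [-1, 1]."""
        if n == 0:
            return np.array([1.0]), np.zeros((1, 1))
        x = np.cos(np.pi * np.arange(n + 1) / n)
        c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
        X = np.tile(x, (n + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
        D -= np.diag(D.sum(axis=1))     # negative-sum trick for the diagonal
        return x, D

    x, D = cheb(16)
    u = np.sin(np.pi * x)
    err = np.max(np.abs(D @ u - np.pi * np.cos(np.pi * x)))
    print(f"max derivative error with 17 points: {err:.2e}")
    ```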

  3. Expanding the Interaction Lexicon for 3D Graphics

    DTIC Science & Technology

    2001-11-01

    [Abstract fragments; original garbled in extraction] ...believe that extending it to work with image-based rendering engines is straightforward. I could modify plenoptic image editing [Seitz] to allow... Reference: S. M. Seitz and K. N. Kutulakos, "Plenoptic Image Editing," International Conference on Computer Vision '98, pages 17-24. [ShapeCapture...]

  4. Microgravity

    NASA Image and Video Library

    2004-04-15

    Computed tomography (CT) images of resin-impregnated Mechanics of Granular Materials (MGM) specimens are assembled to provide 3-D volume renderings of density patterns formed by dislocation under the external loading stress profile applied during the experiments. Experiments flown on STS-79 and STS-89. Principal Investigator: Dr. Stein Sture

  5. Non-photorealistic rendering of virtual implant models for computer-assisted fluoroscopy-based surgical procedures

    NASA Astrophysics Data System (ADS)

    Zheng, Guoyan

    2007-03-01

    Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of the surgical action. In the so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or through intraoperatively digitized anatomical landmarks, photorealistic rendering of the surgical action has been shown to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
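
    One widely used non-photorealistic style that keeps the underlying image visible is to draw only silhouette edges: mesh edges shared by a front-facing and a back-facing triangle for the current view direction. Whether this is the exact style used in the paper is not stated here; the sketch below shows the generic technique on a tetrahedron.

    ```python
    # Silhouette-edge extraction: an edge is a silhouette edge if its two
    # adjacent triangles face opposite ways relative to the view direction.
    import numpy as np

    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # outward-wound
    view_dir = np.array([0.3, 0.4, 1.0])

    def face_normal(f):
        a, b, c = verts[list(f)]
        return np.cross(b - a, c - a)

    front = {f: np.dot(face_normal(f), view_dir) > 0 for f in faces}

    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for f in faces:
        for i in range(3):
            e = tuple(sorted((f[i], f[(i + 1) % 3])))
            edge_faces.setdefault(e, []).append(f)

    silhouette = [e for e, fs in edge_faces.items()
                  if len(fs) == 2 and front[fs[0]] != front[fs[1]]]
    print("silhouette edges:", silhouette)
    ```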

  6. Bragg Gratings, Photosensitivity, and Poling in Glass Fibers and Waveguides: Applications and Fundamentals. Technical Digest Series, Volume 17

    DTIC Science & Technology

    1998-05-26

    [Abstract fragments; the original equation and two-column layout were garbled in extraction] A. Theory: The presence of losses in the cladding modes renders their propagation constants complex and, therefore, produces higher propagation losses. A growth-theory relation tf(L, F, Ga) [10, 11] is quoted, where L is the service length, L0 is the fiber gauge length, and m is... [Program fragment] ...a single input pulse (p. 114). 8:30am, BMB2: Ultrashort pulse propagation through fiber gratings: theory and experiment, L.R. Chen, S.D. Benjamin.

  7. Lamb wave detection of limpet mines on ship hulls.

    PubMed

    Bingham, Jill; Hinders, Mark; Friedman, Adam

    2009-12-01

    This paper describes the use of ultrasonic guided waves for identifying the mass loading due to underwater limpet mines on ship hulls. The Dynamic Wavelet Fingerprint Technique (DWFT) is used to render the guided wave mode information in two-dimensional binary images, because the waveform features of interest are too subtle to identify in the time domain. The use of wavelets allows both time and scale features from the original signals to be retained, and image processing can be used to automatically extract features that correspond to the arrival times of the guided wave modes. For further understanding of how the guided wave modes propagate through the real structures, a parallel-processing 3D elastic wave simulation was developed using the elastodynamic finite integration technique (EFIT). This full-field technique models situations that are too complex for analytical solutions, such as built-up 3D structures. The simulations have produced informative visualizations of the guided wave modes in the structures, as well as directly mimicking the output from sensors placed in the simulation space for comparison to experiments. Results from both drydock and in-water experiments with dummy mines are also shown.

  8. Splitting a colon geometry with multiplanar clipping

    NASA Astrophysics Data System (ADS)

    Ahn, David K.; Vining, David J.; Ge, Yaorong; Stelts, David R.

    1998-06-01

    Virtual colonoscopy, a recent three-dimensional (3D) visualization technique, has provided radiologists with a unique diagnostic tool. Using this technique, a radiologist can examine the internal morphology of a patient's colon by navigating through a surface-rendered model constructed from helical computed tomography image data. Virtual colonoscopy can be used to detect early forms of colon cancer in a way that is less invasive and less expensive than conventional endoscopy. However, the common approach of 'flying' through the colon lumen to visually search for polyps is tedious and time-consuming, especially when a radiologist loses his or her orientation within the colon. Furthermore, a radiologist's field of view is often limited by the 3D camera position located inside the colon lumen. We have developed a new technique, called multi-planar geometry clipping, that addresses these problems. Our algorithm divides a complex colon anatomy into several smaller segments, and then splits each of these segments in half for display on a static medium. Multi-planar geometry clipping eliminates virtual colonoscopy's dependence upon expensive, real-time graphics workstations by enabling radiologists to globally inspect the entire internal surface of the colon from a single viewpoint.
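
    The splitting step can be sketched as classification of triangles against a cutting plane by signed distance. A production implementation, as the paper describes, would also cut triangles that straddle the plane; this minimal sketch, with a toy tube mesh, assigns whole triangles by centroid instead.

    ```python
    # Minimal sketch of splitting a surface mesh with a clipping plane.
    import numpy as np

    def split_mesh(verts, tris, plane_point, plane_normal):
        """Partition triangle indices into (negative_side, positive_side)."""
        n = np.asarray(plane_normal, float)
        p0 = np.asarray(plane_point, float)
        neg, pos = [], []
        for t in tris:
            centroid = verts[list(t)].mean(axis=0)
            (pos if np.dot(centroid - p0, n) >= 0 else neg).append(t)
        return neg, pos

    # Toy "tube" mesh split by a plane through its axis.
    theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    ring0 = np.c_[np.cos(theta), np.sin(theta), np.zeros(8)]
    ring1 = ring0 + [0, 0, 1]
    verts = np.vstack([ring0, ring1])
    tris = [(i, (i + 1) % 8, i + 8) for i in range(8)] + \
           [((i + 1) % 8, (i + 1) % 8 + 8, i + 8) for i in range(8)]
    half_a, half_b = split_mesh(verts, tris, plane_point=(0, 0, 0),
                                plane_normal=(0, 1, 0))
    print(len(half_a), "triangles below the plane,", len(half_b), "above")
    ```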

  9. An Agent Based Collaborative Simplification of 3D Mesh Model

    NASA Astrophysics Data System (ADS)

    Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro

    Large mesh models pose a challenge for fast rendering and transmission over the Internet, and mesh models obtained using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile-agent-based collaborative environment on the Mobile-C development platform. Communication among the distributed agents includes grabbing images of the visualized mesh model, annotating the grabbed images, and instant messaging. Remote, collaborative simplification can thus be conducted efficiently over the Internet.

  10. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  11. 32 CFR 156.3 - Policy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (d) No negative inference may be raised solely on the basis of mental health counseling. Such counseling may be a positive factor that, by itself, shall not jeopardize the rendering of eligibility... counseling, where relevant to adjudication for a national security position, may justify further inquiry to...

  12. Virtual Environment for Surgical Room of the Future.

    DTIC Science & Technology

    1995-10-01

    [Outline fragments; the original two-column layout was garbled in extraction] Topics include: geometric modeling (wire frame, surface, solid); dynamic interaction; acoustic three-dimensional modeling based on radiosity; rendering and shadowing (ray tracing, radiosity); fluid flow; animation; infection control of people and equipment; object recognition; communication.

  13. Windows Memory Forensic Data Visualization

    DTIC Science & Technology

    2014-06-12

    ...clustering characteristics (Bastian et al., 2009). The software is written in Java and utilizes the OpenGL library for rendering graphical content.

  14. Evaluation of Adaptive Subdivision Method on Mobile Device

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila

    2013-06-01

    Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still difficult because of the constrained resources of mobile devices. To reduce storage requirements, a 3D object is simplified, but certain areas of curvature are compromised and the surface is no longer smooth. Therefore, a method to smooth selected areas of curvature is implemented; one popular method is the adaptive subdivision method. Experiments are performed using two data sets, with results evaluated in terms of processing time, rendering speed, and the appearance of the object on the devices. The results show a drop in frame-rate performance due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Since the two devices differ in screen size, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.
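
    The reported frame-rate drop follows directly from the geometric growth of the triangle count: in a typical subdivision scheme (e.g., Loop-style), each iteration splits every triangle into four. A small illustrative calculation, with hypothetical numbers rather than the paper's data:

    ```python
    # Triangle growth per subdivision level, assuming a 1-to-4 split scheme.
    def triangles_after(base_triangles: int, levels: int) -> int:
        return base_triangles * 4 ** levels

    base = 1_000  # hypothetical simplified mesh
    for level in range(4):
        print(f"level {level}: {triangles_after(base, level):>6} triangles")
    # level 0:   1000 / level 1:   4000 / level 2:  16000 / level 3:  64000
    ```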

  15. MRI for transformation of preserved organs and their pathologies into digital formats for medical education and creation of a virtual pathology museum. A pilot study.

    PubMed

    Venkatesh, S K; Wang, G; Seet, J E; Teo, L L S; Chong, V F H

    2013-03-01

    To evaluate the feasibility of magnetic resonance imaging (MRI) for the transformation of preserved organs and their disease entities into digital formats for medical education and the creation of a virtual museum. MRI of 114 selected pathology specimen jars representing different organs and their diseases was performed using a 3 T MRI machine with two or more MRI sequences, including three-dimensional (3D) T1-weighted (T1W), 3D-T2W, 3D-FLAIR (fluid-attenuated inversion recovery), fat-water separation (DIXON), and gradient-recalled echo (GRE) sequences. Qualitative assessment of MRI for depiction of disease and internal anatomy was performed. Volume rendering was performed on commercially available workstations. The digital images, 3D models, and photographs of specimens were archived into a workstation serving as a virtual pathology museum. MRI was successfully performed on all specimens. The 3D-T1W and 3D-T2W sequences demonstrated the best contrast between normal and pathological tissues. The digital material is a useful aid for understanding disease by giving insights into internal structural changes not apparent on visual inspection alone. Volume rendering produced vivid 3D models with better contrast between normal and diseased tissue than the real specimens or their photographs in some cases. The digital library provides good illustration material for radiological-pathological correlation by enhancing pathological anatomy and providing information on the nature and signal characteristics of tissues. In some specimens, the MRI appearance may differ from that of the corresponding organ and disease in vivo because of dead tissue and changes induced by prolonged contact with preservative fluid. MRI of pathology specimens is feasible and provides excellent images for education and for creating a virtual pathology museum that can serve as a permanent record of digital material for self-directed learning, improved teaching aids, and radiological-pathological correlation. Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  16. EpitopeViewer: a Java application for the visualization and analysis of immune epitopes in the Immune Epitope Database and Analysis Resource (IEDB).

    PubMed

    Beaver, John E; Bourne, Philip E; Ponomarenko, Julia V

    2007-02-21

    Structural information about epitopes, particularly the three-dimensional (3D) structures of antigens in complex with immune receptors, presents a valuable source of data for immunology. This information is available in the Protein Data Bank (PDB) and provided in curated form by the Immune Epitope Database and Analysis Resource (IEDB). With continued growth in these data and their importance in understanding molecular-level interactions of immunological interest, there is a need for new specialized molecular visualization and analysis tools. The EpitopeViewer is a platform-independent Java application for the visualization of the three-dimensional structure and sequence of epitopes and analyses of their interactions with antigen-specific receptors of the immune system (antibodies, T cell receptors and MHC molecules). The viewer renders both 3D views and two-dimensional plots of intermolecular interactions between the antigen and receptor(s) by reading curated data from the IEDB and/or calculated on-the-fly from atom coordinates from the PDB. The 3D views and associated interactions can be saved for future use and publication. The EpitopeViewer can be accessed from the IEDB Web site http://www.immuneepitope.org through the quick link 'Browse Records by 3D Structure.' The EpitopeViewer has been designed and tested for use by immunologists with little or no training in molecular graphics. The EpitopeViewer can be launched from most popular Web browsers without user intervention. A Java Runtime Environment (JRE) 1.4.2 or higher is required.

  17. 8 CFR 235.2 - Parole for deferred inspection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... can overcome a finding of inadmissibility by: (1) Posting a bond under section 213 of the Act; (2) Seeking and obtaining a waiver under section 211 or 212(d)(3) or (4) of the Act; or (3) Presenting... arrival is found or believed to be suffering from a disability that renders it impractical to proceed with...

  18. 6-DoF Haptic Rendering Using Continuous Collision Detection between Points and Signed Distance Fields.

    PubMed

    Hongyi Xu; Barbic, Jernej

    2017-01-01

    We present an algorithm for fast continuous collision detection between points and signed distance fields, and demonstrate how to robustly use it for 6-DoF haptic rendering of contact between objects with complex geometry. Continuous collision detection is often needed in computer animation, haptics, and virtual reality applications, but has so far only been investigated for polygon (triangular) geometry representations. We demonstrate how to robustly and continuously detect intersections between points and level sets of the signed distance field. We suggest using an octree subdivision of the distance field for fast traversal of distance field cells. We also give a method to resolve continuous collisions between point clouds organized into a tree hierarchy and a signed distance field, enabling rendering of contact between rigid objects with complex geometry. We investigate and compare two 6-DoF haptic rendering methods now applicable to point-versus-distance field contact for the first time: continuous integration of penalty forces, and a constraint-based method. An experimental comparison to discrete collision detection demonstrates that the continuous method is more robust and can correctly resolve collisions even under high velocities and during complex contact.
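
    The sketch below conveys the core idea of continuous point-versus-signed-distance-field collision detection in a deliberately simplified form (segment sampling plus bisection); the paper's actual method is more elaborate, using octree traversal of the distance field and tree hierarchies over point clouds.

    ```python
    # Illustrative sketch (not the paper's algorithm): continuous collision
    # detection of a point moving from p0 to p1 against the zero level set
    # of a signed distance field `sdf`. Sample along the segment, find the
    # first outside-to-inside sign change, and refine it by bisection.
    import numpy as np

    def first_contact(sdf, p0, p1, samples=32, bisect_iters=20):
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        ts = np.linspace(0.0, 1.0, samples + 1)
        ds = np.array([sdf(p0 + t * (p1 - p0)) for t in ts])
        for i in range(samples):
            if ds[i] > 0.0 >= ds[i + 1]:           # sign change: outside -> inside
                lo, hi = ts[i], ts[i + 1]
                for _ in range(bisect_iters):      # refine time of contact
                    mid = 0.5 * (lo + hi)
                    if sdf(p0 + mid * (p1 - p0)) > 0.0:
                        lo = mid
                    else:
                        hi = mid
                return hi                          # earliest contact time in [0, 1]
        return None                                # no collision this step

    # Example: unit-sphere SDF; a point flying straight through the origin.
    sphere = lambda p: np.linalg.norm(p) - 1.0
    t = first_contact(sphere, [-2.0, 0.0, 0.0], [2.0, 0.0, 0.0])  # ~0.25
    ```

    For the unit-sphere example, the point travelling from x = -2 to x = 2 first touches the surface at t of roughly 0.25, a contact that purely discrete collision checks can miss entirely at high velocities, which is the failure mode the paper's continuous method is designed to avoid.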

  19. KARHUNEN-LOÈVE Basis Functions of Kolmogorov Turbulence in the Sphere

    NASA Astrophysics Data System (ADS)

    Mathar, Richard J.

    In support of modeling atmospheric turbulence, the statistically independent Karhunen-Loève modes of refractive indices with an isotropic Kolmogorov spectrum of the covariance are calculated inside a sphere of fixed radius, rendered as a series of 3D Zernike functions. Many of the symmetry arguments of the well-known associated 2D problem for the circular input pupil remain valid. The technique of efficient diagonalization of the eigenvalue problem in wavenumber space is founded on the Fourier representation of the 3D Zernike basis, and is extensible to the von Kármán power spectrum.

  20. Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data

    NASA Astrophysics Data System (ADS)

    Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.

    2017-12-01

    With growing attention on the ocean and the rapid development of marine sensing, there is an increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on technologies such as GPU rendering, CUDA parallel computing, and a rapid grid-oriented strategy, a series of efficient, high-quality visualization methods that can handle large-scale, multi-dimensional marine data under different environmental circumstances is proposed in this paper. Firstly, a high-quality seawater simulation is realized by an FFT algorithm, bump mapping, and texture-animation technology. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized by 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, a surface reconstruction algorithm, a dynamic LOD algorithm, and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying marine environment simulation but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is built on the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above and dynamically and simultaneously shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil-spill particles (oil particles, hydrate particles, gas particles, etc.) in multiple dimensions. With such an application, valuable reference and decision-making information can be provided for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning, and emergency response.

  1. Forensic 3D Visualization of CT Data Using Cinematic Volume Rendering: A Preliminary Study.

    PubMed

    Ebert, Lars C; Schweitzer, Wolf; Gascho, Dominic; Ruder, Thomas D; Flach, Patricia M; Thali, Michael J; Ampanozi, Garyfalia

    2017-02-01

    The 3D volume-rendering technique (VRT) is commonly used in forensic radiology. Its main function is to explain medical findings to state attorneys, judges, or police representatives. New visualization algorithms permit the generation of almost photorealistic volume renderings of CT datasets. The objective of this study is to present and compare a variety of radiologic findings to illustrate the differences between and the advantages and limitations of the current VRT and the physically based cinematic rendering technique (CRT). Seventy volunteers were shown VRT and CRT reconstructions of 10 different cases. They were asked to mark the findings on the images and rate them in terms of realism and understandability. A total of 48 of the 70 questionnaires were returned and included in the analysis. On the basis of most of the findings presented, CRT appears to be equal or superior to VRT with respect to the realism and understandability of the visualized findings. Overall, in terms of realism, the difference between the techniques was statistically significant (p < 0.05). Most participants perceived the CRT findings to be more understandable than the VRT findings, but that difference was not statistically significant (p > 0.05). CRT, which is similar to conventional VRT, is not primarily intended for diagnostic radiologic image analysis, and therefore it should be used primarily as a tool to deliver visual information in the form of radiologic image reports. Using CRT for forensic visualization might have advantages over using VRT if conveying a high degree of visual realism is of importance. Most of the shortcomings of CRT have to do with the software being an early prototype.

  2. 10. Historic photo of rendering of rocket engine test facility ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Historic photo of rendering of rocket engine test facility complex, April 28, 1964. On file at NASA Plumbrook Research Center, Sandusky, Ohio. NASA GRC photo number C-69472. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH

  3. Chrono: A Parallel Physics Library for Rigid-Body, Flexible-Body, and Fluid Dynamics

    DTIC Science & Technology

    2013-08-01

    ...big data. Chrono::Render is capable of using 320 cores and is built around Pixar's RenderMan. All these components combine to produce Chrono, a multi-... rather small collection of rigid and/or deformable bodies of complex geometry (hourglass wall, wheel, track shoe, excavator blade, dipper), and a... motivated by the scope of arbitrary data sets and the potentially immense scene complexity that results from big data; REYES, the underlying architecture...

  4. Visualization for Molecular Dynamics Simulation of Gas and Metal Surface Interaction

    NASA Astrophysics Data System (ADS)

    Puzyrkov, D.; Polyakov, S.; Podryga, V.

    2016-02-01

    The development of methods, algorithms, and applications for visualization of molecular dynamics simulation outputs is discussed. Visual analysis of the results of such calculations is a complex and pressing problem, especially for large-scale simulations. To solve this challenging task it is necessary to decide: 1) what data parameters to render, 2) what type of visualization to choose, and 3) what development tools to use. In the present work an attempt to answer these questions was made. For visualization we propose drawing the particles at their 3D coordinates together with their velocity vectors, trajectories, and volume density in the form of isosurfaces or fog. We tested a post-processing and visualization approach based on the Python language with additional libraries. Parallel software was also developed that can process large volumes of data in the 3D regions of the examined system. This software produces the desired results in parallel with the calculations and finally collects the individual frames into a video file. The software package "Enthought Mayavi2" was used as the visualization tool. This application allowed us to study the interaction of a gas with a metal surface and to observe the adsorption effect closely.
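
    As a flavor of what such a Python pipeline looks like, here is a minimal sketch with synthetic data (not the authors' code) that draws particles at their 3D coordinates together with their velocity vectors using Mayavi's mlab interface:

    ```python
    # A minimal sketch, assuming synthetic data, of the kind of rendering
    # the authors describe: particles at their 3D coordinates plus
    # velocity vectors, via Mayavi's mlab interface.
    import numpy as np
    from mayavi import mlab

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 1.0, size=(500, 3))   # particle positions
    vel = rng.normal(0.0, 0.1, size=(500, 3))    # particle velocities
    speed = np.linalg.norm(vel, axis=1)

    # Particles colored by speed, plus velocity vectors as arrows.
    mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], speed,
                  scale_mode="none", scale_factor=0.01)
    mlab.quiver3d(pos[:, 0], pos[:, 1], pos[:, 2],
                  vel[:, 0], vel[:, 1], vel[:, 2])
    mlab.show()
    ```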

  5. Novel Real-Time Facial Wound Recovery Synthesis Using Subsurface Scattering

    PubMed Central

    Chin, Seongah

    2014-01-01

    We propose a wound recovery synthesis model that illustrates the appearance of a wound healing on a 3-dimensional (3D) face. The H3 model is used to determine the size of the recovering wound. Furthermore, we present our subsurface scattering model that is designed to take the multilayered skin structure of the wound into consideration to represent its color transformation. We also propose a novel real-time rendering method based on the results of an analysis of the characteristics of translucent materials. Finally, we validate the proposed methods with 3D wound-simulation experiments using shading models. PMID:25197721

  6. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills from recognizing rocks, to measuring orientations of features in the field, to finding oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualizations skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused at developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org) we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as a transparent feature to explore how the layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided using either movies of the visualization (which can also be used for examples during lectures), or the data and software can be downloaded to allow for more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field experience in regions without good geologic outcrops, or for students with disabilities that prevent them from going into the field. These exercises and modules are available from 3dgeostructuresvis.ucdavis.edu. We plan to add several new structures to the site each year. This project was funded by a National Science Foundation CAREER grant to Billen.

  7. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high fidelity simulation of ocean environment, visualization of massive and multidimensional marine data, and imitation of marine lives. VV-Ocean is composed of five modules, i.e. memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, imitating and simulating marine lives intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce drifting and diffusion processes of oil spilling from sea bottom to surface. Environment factors such as ocean current and wind field have been considered in this simulation. On this platform oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.

  8. An HTML5-Based Pure Website Solution for Rapidly Viewing and Processing Large-Scale 3D Medical Volume Reconstruction on Mobile Internet

    PubMed Central

    Chen, Xin; Zhang, Ye; Zhang, Jingna; Li, Ying; Mo, Xuemei; Chen, Wei

    2017-01-01

    This study aimed to propose a pure web-based solution that serves users accessing large-scale 3D medical volumes anywhere, with a good user experience and complete details. A novel solution based on a Master-Slave interaction mode was proposed, which combines the advantages of remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism that listens to interactive requests from clients (Slave model) and guides Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors on the Slave model and to enhance the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high-quality image remained at approximately 1 s, while the peak bandwidth was <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution lets users rapidly access the application via one URL hyperlink, without special software or hardware requirements, in a diversified network environment, and can be seamlessly integrated into other telemedical systems. PMID:28638406

  9. An HTML5-Based Pure Website Solution for Rapidly Viewing and Processing Large-Scale 3D Medical Volume Reconstruction on Mobile Internet.

    PubMed

    Qiao, Liang; Chen, Xin; Zhang, Ye; Zhang, Jingna; Wu, Yi; Li, Ying; Mo, Xuemei; Chen, Wei; Xie, Bing; Qiu, Mingguo

    2017-01-01

    This study aimed to propose a pure web-based solution that serves users accessing large-scale 3D medical volumes anywhere, with a good user experience and complete details. A novel solution based on a Master-Slave interaction mode was proposed, which combines the advantages of remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism that listens to interactive requests from clients (Slave model) and guides Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors on the Slave model and to enhance the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high-quality image remained at approximately 1 s, while the peak bandwidth was <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution lets users rapidly access the application via one URL hyperlink, without special software or hardware requirements, in a diversified network environment, and can be seamlessly integrated into other telemedical systems.
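
    The paper does not publish its wire protocol, but a minimal sketch of a message-responding server in the spirit of the Master-Slave design might look as follows; the JSON-lines message format and the render_volume() stub are assumptions for illustration, not the authors' implementation.

    ```python
    # Hedged sketch: clients send one JSON interaction request per line
    # (rotate, zoom, ...) and the server answers each with a freshly
    # rendered, length-prefixed image.
    import json
    import socketserver

    def render_volume(request: dict) -> bytes:
        # Placeholder for the Master volume renderer (e.g., GPU ray casting).
        return b"<image bytes for view %s>" % json.dumps(request).encode()

    class SlaveRequestHandler(socketserver.StreamRequestHandler):
        def handle(self):
            for line in self.rfile:              # one JSON request per line
                request = json.loads(line)
                image = render_volume(request)
                self.wfile.write(len(image).to_bytes(4, "big") + image)

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("0.0.0.0", 9000),
                                             SlaveRequestHandler) as srv:
            srv.serve_forever()
    ```

    In this style of design, upstream traffic stays tiny (short JSON requests) while each response carries a single rendered image, roughly mirroring the small request bandwidth and per-request image responses the study reports.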

  10. Using CAD software to simulate PV energy yield - The case of product integrated photovoltaic operated under indoor solar irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reich, N.H.; van Sark, W.G.J.H.M.; Turkenburg, W.C.

    2010-08-15

    In this paper, we show that photovoltaic (PV) energy yields can be simulated using standard rendering and ray-tracing features of Computer Aided Design (CAD) software. To this end, three-dimensional (3-D) sceneries are ray-traced in CAD. The PV power output is then modeled by translating irradiance intensity data of rendered images back into numerical data. To ensure accurate results, the solar irradiation data used as input is compared to numerical data obtained from rendered images, showing excellent agreement. As expected, also ray-tracing precision in the CAD software proves to be very high. To demonstrate PV energy yield simulations using this innovative concept, solar radiation time course data of a few days was modeled in 3-D to simulate distributions of irradiance incident on flat, single- and double-bend shapes and a PV powered computer mouse located on a window sill. Comparisons of measured to simulated PV output of the mouse show that also in practice, simulation accuracies can be very high. Theoretically, this concept has great potential, as it can be adapted to suit a wide range of solar energy applications, such as sun-tracking and concentrator systems, Building Integrated PV (BIPV) or Product Integrated PV (PIPV). However, graphical user interfaces of 'CAD-PV' software tools are not yet available. (author)
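
    The back-translation step (rendered pixels to irradiance to PV power) can be sketched as follows; the linear calibration and all numbers are illustrative assumptions, not values from the study.

    ```python
    # Illustrative sketch: map gray values of a rendered image back to
    # irradiance via a linear calibration, then estimate PV output from
    # mean irradiance, cell area, and module efficiency.
    import numpy as np

    def image_to_irradiance(gray, w_per_m2_per_level):
        """gray: 2D array of pixel levels; returns irradiance in W/m^2."""
        return gray * w_per_m2_per_level

    def pv_power(irradiance, cell_area_m2, efficiency):
        """Mean irradiance over the cell's pixels times area and efficiency."""
        return irradiance.mean() * cell_area_m2 * efficiency

    # Example: 8-bit render where level 255 corresponds to 1000 W/m^2.
    gray = np.full((64, 64), 180.0)          # synthetic rendered patch
    irr = image_to_irradiance(gray, 1000.0 / 255.0)
    print(pv_power(irr, cell_area_m2=0.001, efficiency=0.15))  # watts
    ```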

  11. Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.

    2016-03-01

    Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass-pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass-pipe views is proposed, where the virtual endoscopic camera position is determined from the device tip location as well as the previous camera position using a Kalman filter, in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass-pipe view to further improve spatial orientation. The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
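
    The final reconstruction step can be illustrated with standard linear (DLT) triangulation from two calibrated views; this is a generic sketch, with the 3x4 C-arm projection matrices assumed known from calibration and the point correspondences supplied by the monotonic mapping described above.

    ```python
    # Generic two-view triangulation sketch (standard DLT), not the
    # paper's exact formulation: recover a 3D point from corresponding
    # 2D centerline points in the two fluoroscopy views.
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)   # least-squares null vector of A
        X = vt[-1]
        return X[:3] / X[3]           # inhomogeneous 3D point
    ```

    Applying this to every matched centerline point pair yields the 3D device path, from which the virtual fluoroscopy and endoscopic views can then be rendered.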

  12. Validation of Thermal Lethality against Salmonella enterica in Poultry Offal during Rendering.

    PubMed

    Jones-Ibarra, Amie-Marie; Acuff, Gary R; Alvarado, Christine Z; Taylor, T Matthew

    2017-09-01

    Recent outbreaks of human disease following contact with companion animal foods cross-contaminated with enteric pathogens, such as Salmonella enterica, have resulted in increased concern regarding the microbiological safety of animal foods. Additionally, the U.S. Food and Drug Administration Food Safety Modernization Act and its implementing rules have stipulated the implementation of current good manufacturing practices and food safety preventive controls for livestock and companion animal foods. Animal foods and feeds are sometimes formulated to include thermally rendered animal by-product meals. The objective of this research was to determine the thermal inactivation of S. enterica in poultry offal during rendering at differing temperatures. Raw poultry offal was obtained from a commercial renderer and inoculated with a mixture of Salmonella serovars Senftenberg, Enteritidis, and Gallinarum (an avian pathogen) prior to being subjected to heating at 150, 155, or 160°F (65.5, 68.3, or 71.1°C) for up to 15 min. Following heat application, surviving Salmonella bacteria were enumerated. Mean D-values for the Salmonella cocktail at 150, 155, and 160°F were 0.254 ± 0.045, 0.172 ± 0.012, and 0.086 ± 0.004 min, respectively, indicative of increasing susceptibility to increased application of heat during processing. The mean thermal process constant (z-value) was 21.948 ± 3.87°F. Results indicate that a 7.0-log-cycle inactivation of Salmonella may be obtained from the cumulative lethality encountered during the heating come-up period and subsequent rendering of raw poultry offal at temperatures not less than 150°F. Current poultry rendering procedures are anticipated to be effective for achieving necessary pathogen control when completed under sanitary conditions.
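
    Using the reported values, the time needed for a 7-log-cycle reduction at each temperature is simply 7 x D(T), with D-values related across temperatures through the z-value; the following worked check uses only numbers stated in the abstract.

    ```python
    # Worked check: D(T) = D_ref * 10**((T_ref - T) / z), and the time for
    # a 7-log-cycle reduction at temperature T is 7 * D(T).
    D_REF, T_REF = 0.254, 150.0   # min, deg F (reported mean D at 150 F)
    Z = 21.948                    # deg F (reported mean z-value)

    def d_value(temp_f: float) -> float:
        return D_REF * 10 ** ((T_REF - temp_f) / Z)

    for temp in (150, 155, 160):
        print(f"{temp}F: D = {d_value(temp):.3f} min, "
              f"7-log time = {7 * d_value(temp):.2f} min")
    # 150F: D = 0.254 min  -> ~1.78 min for a 7-log reduction
    # 155F: D ~ 0.150 min; 160F: D ~ 0.089 min (close to the measured
    # 0.172 and 0.086 min)
    ```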

  13. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the users choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high performance computer systems.

  14. 11. Historic photo of cutaway rendering of rocket engine test ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Historic photo of cutaway rendering of rocket engine test facility complex, June 11, 1965. On file at NASA Plumbrook Research Center, Sandusky, Ohio. NASA GRC photo number C-74433. - Rocket Engine Testing Facility, NASA Glenn Research Center, Cleveland, Cuyahoga County, OH

  15. Advances in the Dynallax solid-state dynamic parallax barrier autostereoscopic visualization display system.

    PubMed

    Peterka, Tom; Kooima, Robert L; Sandin, Daniel J; Johnson, Andrew; Leigh, Jason; DeFanti, Thomas A

    2008-01-01

    A solid-state dynamic parallax barrier autostereoscopic display mitigates some of the restrictions present in static barrier systems, such as fixed view-distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. Moreover, Dynallax can output four independent eye channels when two viewers are present, and both head-tracked viewers receive an independent pair of left-eye and right-eye perspective views based on their position in 3D space. The display device is constructed by using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and a modulated virtual environment composed of two or four channels is rendered on the rear display. Dynallax was recently demonstrated in a small-scale head-tracked prototype system. This paper summarizes the concepts presented earlier, extends the discussion of various topics, and presents recent improvements to the system.

  16. A novel shape-changing haptic table-top display

    NASA Astrophysics Data System (ADS)

    Wang, Jiabin; Zhao, Lu; Liu, Yue; Wang, Yongtian; Cai, Yi

    2018-01-01

    A shape-changing table-top display with haptic feedback allows its users to perceive 3D visual and texture displays interactively. Since few existing devices are developed as accurate displays with regulated haptic feedback, a novel attentive and immersive shape-changing mechanical interface (SCMI) consisting of an image processing unit and a transformation unit was proposed in this paper. In order to support a precise 3D table-top display with an offset of less than 2 mm, a custom-made mechanism was developed to form a precise surface and regulate the feedback force. The proposed image processing unit is capable of extracting texture data from 2D pictures for rendering a shape-changing surface and realizing 3D modeling. A preliminary evaluation proved the feasibility of the proposed system.

  17. WebGL-enabled 3D visualization of a Solar Flare Simulation

    NASA Astrophysics Data System (ADS)

    Chen, A.; Cheung, C. M. M.; Chintzoglou, G.

    2016-12-01

    The visualization of magnetohydrodynamic (MHD) simulations of astrophysical systems such as solar flares often requires specialized software packages (e.g., ParaView and VAPOR). A shortcoming of such software packages is the inability to share findings with the public and the scientific community in an interactive and engaging manner. By using the JavaScript-based WebGL application programming interface (API) and the three.js JavaScript package, we create an online, in-browser experience for rendering solar flare simulations that is interactive and accessible to the general public. The WebGL renderer displays objects such as vector flow fields, streamlines, and textured isosurfaces. This allows the user to explore the spatial relation between the solar coronal magnetic field and the thermodynamic structure of the plasma in which the magnetic field is embedded. Plans for extending the features of the renderer will also be presented.

  18. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    NASA Astrophysics Data System (ADS)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D-object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions at different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces, compared to the original reconstructions, can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition of the reconstructed 3D human representation compared to animated computer avatars.

  19. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  20. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  1. Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images

    NASA Technical Reports Server (NTRS)

    Sams, Clarence F.

    2016-01-01

    The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.

  2. CT urography of urinary diversions with enhanced CT digital radiography: preliminary experience.

    PubMed

    Sudakoff, Gary S; Guralnick, Michael; Langenstroer, Peter; Foley, W Dennis; Cihlar, Krista L; Shakespear, Jonathan S; See, William A

    2005-01-01

    The purpose of this study was to determine if 3D-rendered CT urography (CTU) depicts both normal and abnormal findings in patients with urinary diversions and if the addition of contrast-enhanced CT digital radiography (CTDR) improves opacification of the urinary collecting system. Thirty CTU and contrast-enhanced CTDR examinations were performed in 24 patients who underwent cystectomy for bladder cancer. Indications for evaluation included hematuria, tumor surveillance, or suspected diversion malfunction. All examinations were evaluated without knowledge of the stage or grade of a patient's tumor and were compared with the clinical records. Opacification of the urinary collecting system was evaluated with 3D CTU alone, contrast-enhanced CTDR alone, and combined CTU and CTDR. Nine abnormalities were identified including distal ureteral strictures (n = 4), vascular compression of the mid left ureter (n = 1), scarring of the mid right pole infundibulum (n = 1), bilateral hydronephrosis and hydroureter (n = 1), urinary reservoir calculus (n = 1), and tumor recurrence invading the afferent limb of the neobladder (n = 1). Eight of the nine detected abnormalities were surgically or pathologically confirmed. All abnormalities were identified on all three imaging techniques but were best seen on 3D CTU and enhanced CTDR images. Incomplete opacification of the urinary collecting system occurred in 17 patients with CTU alone, 12 patients with contrast-enhanced CTDR alone, and nine patients with combined CTU and contrast-enhanced CTDR. Compared with CTU alone, the combined technique of 3D CTU and contrast-enhanced CTDR improved opacification by a statistically significant difference (p = 0.037). CTU with 3D rendering can accurately depict both normal and abnormal postoperative findings in patients with urinary diversions. Adding enhanced CTDR can improve visualization of the urinary collecting system.

  3. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.
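
    For readers unfamiliar with line integral convolution, the compact 2D sketch below captures the core operation (the paper extends it to 3D volumes and adds visibility-impeding halos): each output pixel is the average of an input noise texture sampled along the local streamline.

    ```python
    # Minimal 2D LIC sketch: trace the streamline through each pixel in
    # both directions and average the noise texture along it.
    import numpy as np

    def lic2d(vx, vy, noise, length=15, step=0.5):
        h, w = noise.shape
        out = np.zeros_like(noise)
        for y in range(h):
            for x in range(w):
                total, count = 0.0, 0
                for sign in (1.0, -1.0):          # trace both directions
                    px, py = float(x), float(y)
                    for _ in range(length):
                        i, j = int(py) % h, int(px) % w
                        total += noise[i, j]
                        count += 1
                        v = np.hypot(vx[i, j], vy[i, j]) + 1e-9
                        px += sign * step * vx[i, j] / v
                        py += sign * step * vy[i, j] / v
                out[y, x] = total / count
        return out

    # Example: circular flow over white noise.
    ys, xs = np.mgrid[0:128, 0:128]
    vx, vy = -(ys - 64.0), (xs - 64.0)
    img = lic2d(vx, vy, np.random.default_rng(1).random((128, 128)))
    ```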

  4. A feasibility study of a 3-D finite element solution scheme for aeroengine duct acoustics

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. L.

    1980-01-01

    The advantage of developing a 3-D model of aeroengine duct acoustics is the ability to analyze axial and circumferential liner segmentation simultaneously. The feasibility of a 3-D duct acoustics model was investigated using Galerkin or least-squares element formulations combined with Gaussian elimination, successive over-relaxation, or conjugate gradient solution algorithms on conventional scalar computers and on a vector machine. A least-squares element formulation combined with a conjugate gradient solver on a CDC Star vector computer initially appeared to have great promise, but severe difficulties were encountered with matrix ill-conditioning. These conditioning difficulties rendered the technique impractical for realistic problems.

  5. Depicting 3D shape using lines

    NASA Astrophysics Data System (ADS)

    DeCarlo, Doug

    2012-03-01

    Over the last few years, researchers in computer graphics have developed sophisticated mathematical descriptions of lines on 3D shapes that can be rendered convincingly as strokes in drawings. These innovations highlight fundamental questions about how human perception takes strokes in drawings as evidence of 3D structure. Answering these questions will lead to a greater scientific understanding of the flexibility and richness of human perception, as well as to practical techniques for synthesizing clearer and more compelling drawings. This paper reviews what is known about the mathematics and perception of computer-generated line drawings of shape and motivates an ongoing program of research to better characterize the shapes people see when they look at such drawings.

  6. Freely-available, true-color volume rendering software and cryohistology data sets for virtual exploration of the temporal bone anatomy.

    PubMed

    Kahrs, Lüder Alexander; Labadie, Robert Frederick

    2013-01-01

    Cadaveric dissection of temporal bone anatomy is not always possible or feasible in certain educational environments. Volume rendering of CT and/or MRI data helps in understanding spatial relationships, but such renderings suffer from nonrealistic depiction, especially regarding the color of anatomical structures. Freely available, nonstained histological data sets, together with software able to render them in realistic color, could overcome this limitation and be a very effective teaching tool. With the recent availability of specialized public-domain software, volume rendering of true-color histological data sets is now possible. We present both feasibility and step-by-step instructions for processing publicly available data sets (the Visible Human Female and the Visible Ear) into easily navigable 3-dimensional models using free software. Example renderings are shown to demonstrate the utility of these free methods for virtual exploration of the complex anatomy of the temporal bone. After exploring the data sets, the Visible Ear appears more natural than the Visible Human. We provide directions for easy-to-use, open-source software in conjunction with freely available histological data sets. This work facilitates self-education in the spatial relationships of anatomical structures inside the human temporal bone and allows exploration of surgical approaches prior to cadaveric testing and/or clinical implementation. Copyright © 2013 S. Karger AG, Basel.

  7. Scientific visualization of volumetric radar cross section data

    NASA Astrophysics Data System (ADS)

    Wojszynski, Thomas G.

    1992-12-01

    For aircraft design and mission planning, designers, threat analysts, mission planners, and pilots require a Radar Cross Section (RCS) central tendency with its associated distribution about a specified aspect, and its relation to a known threat. Historically, RCS data sets have been statistically analyzed to evaluate an RCS profile. However, scientific visualization, the application of computer graphics techniques to produce pictures of complex physical phenomena, appears to be a more promising tool for interpreting these data. This work describes data reduction techniques and a surface rendering algorithm to construct and display a complex polyhedron from adjacent contours of RCS data. Data reduction is accomplished by sectorizing the data and characterizing its statistical properties. Color, lighting, and orientation cues are added to complete the visualization system. The tool may be useful for synthesis, design, and analysis of complex, low-observable air vehicles.

  8. The Airborne Optical Systems Testbed (AOSTB)

    DTIC Science & Technology

    2017-05-31

    ...appropriate color to each pixel, displayed in a two-dimensional array. Another method is to render a 3D model from the data and display the model as if... Over the last two decades MIT Lincoln Laboratory (MITLL) has pioneered the development... two-dimensional (2D) grid of detectors. Rather than measuring intensity, as in a conventional camera, these detectors measure the photon time-of...

  9. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture.

  10. Image processing methods in two and three dimensions used to animate remotely sensed data. [cloud cover

    NASA Technical Reports Server (NTRS)

    Hussey, K. J.; Hall, J. R.; Mortensen, R. A.

    1986-01-01

    Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.
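
    The key-frame interpolation step can be sketched as simple per-parameter linear interpolation between key frames; the parameter names below are illustrative, not those of the original VICAR2/TAE programs.

    ```python
    # Sketch of key-frame interpolation: linearly interpolate each
    # rendering parameter between key frames to produce the in-between
    # animation frames.
    import numpy as np

    def interpolate_keyframes(key_times, key_params, n_frames):
        """key_times: sorted times; key_params: (K, P) parameter sets."""
        key_params = np.asarray(key_params, float)
        frame_times = np.linspace(key_times[0], key_times[-1], n_frames)
        return np.column_stack([
            np.interp(frame_times, key_times, key_params[:, p])
            for p in range(key_params.shape[1])
        ])

    # Two key frames of (azimuth, elevation, zoom) -> 5 animation frames.
    frames = interpolate_keyframes([0, 1], [[0, 30, 1.0], [90, 45, 2.0]], 5)
    ```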

  11. Image-based 3D reconstruction and virtual environmental walk-through

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Fang, Lixiong; Luo, Ying

    2001-09-01

    We present a 3D reconstruction method that combines geometry-based modeling, image-based modeling, and rendering techniques. The first component is an interactive geometry modeling method that recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms for walking through a virtual space, then design and implement a high-performance multi-threaded walk-through algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.

  12. Characterization of Two Related Drosophila γ-tubulin Complexes that Differ in Their Ability to Nucleate Microtubules

    PubMed Central

    Oegema, Karen; Wiese, Christiane; Martin, Ona C.; Milligan, Ronald A.; Iwamatsu, Akihiro; Mitchison, Timothy J.; Zheng, Yixian

    1999-01-01

    γ-tubulin exists in two related complexes in Drosophila embryo extracts (Moritz, M., Y. Zheng, B.M. Alberts, and K. Oegema. 1998. J. Cell Biol. 142:1– 12). Here, we report the purification and characterization of both complexes that we name γ-tubulin small complex (γTuSC; ∼280,000 D) and Drosophila γTuRC (∼2,200,000 D). In addition to γ-tubulin, the γTuSC contains Dgrip84 and Dgrip91, two proteins homologous to the Spc97/98p protein family. The γTuSC is a structural subunit of the γTuRC, a larger complex containing about six additional polypeptides. Like the γTuRC isolated from Xenopus egg extracts (Zheng, Y., M.L. Wong, B. Alberts, and T. Mitchison. 1995. Nature. 378:578–583), the Drosophila γTuRC can nucleate microtubules in vitro and has an open ring structure with a diameter of 25 nm. Cryo-electron microscopy reveals a modular structure with ∼13 radially arranged structural repeats. The γTuSC also nucleates microtubules, but much less efficiently than the γTuRC, suggesting that assembly into a larger complex enhances nucleating activity. Analysis of the nucleotide content of the γTuSC reveals that γ-tubulin binds preferentially to GDP over GTP, rendering γ-tubulin an unusual member of the tubulin superfamily. PMID:10037793

  13. A novel cost-effective computer-assisted imaging technology for accurate placement of thoracic pedicle screws.

    PubMed

    Abe, Yuichiro; Ito, Manabu; Abumi, Kuniyoshi; Kotani, Yoshihisa; Sudo, Hideki; Minami, Akio

    2011-11-01

    Use of computer-assisted spine surgery (CASS) technologies, such as navigation systems, to improve the accuracy of pedicle screw (PS) placement is increasingly popular. Despite their benefits, previous CASS systems are too expensive to be ubiquitously employed, and more affordable and portable systems are desirable. The aim of this study was to introduce a novel and affordable computer-assisted technique that 3-dimensionally visualizes anatomical features of the pedicles and assists in PS insertion. The authors have termed this the 3D-visual guidance technique for inserting pedicle screws (3D-VG TIPS). The 3D-VG technique for placing PSs requires only a consumer-class computer with an inexpensive 3D DICOM viewer; other special equipment is unnecessary. Preoperative CT data of the spine were collected for each patient using the 3D-VG TIPS. In this technique, the anatomical axis of each pedicle can be analyzed by volume-rendered 3D models, as with existing navigation systems, and both the ideal entry point and the trajectory of each PS can be visualized on the surface of 3D-rendered images. Intraoperative guidance slides are made from these images and displayed on a TV monitor in the operating room. The surgeon can insert PSs according to these guidance slides. The authors enrolled 30 patients with adolescent idiopathic scoliosis (AIS) who underwent posterior fusion with segmental screw fixation for validation of this technique. The novel technique allowed surgeons, from office or home, to evaluate the precise anatomy of each pedicle and the risks of screw misplacement, and to perform 3D preoperative planning for screw placement on their own computer. Looking at both 3D guidance images on a TV monitor and the bony structures of the posterior elements in each patient in the operating theater, surgeons were able to determine the best entry point for each PS with ease and confidence. Using the current technique, the screw malposition rate was 4.5% in the thoracic region in corrective surgery for AIS. The authors found that 3D-VG TIPS worked on a consumer-class computer and easily visualized the ideal entry point and trajectory of PSs in any operating theater without costly special equipment. This new technique is suitable for preoperative planning and intraoperative guidance when performing reconstructive surgery with PSs.

  14. Lbs Augmented Reality Assistive System for Utilities Infrastructure Management Through Galileo and Egnos

    NASA Astrophysics Data System (ADS)

    Stylianidis, E.; Valaria, E.; Smagas, K.; Pagani, A.; Henriques, J.; Garca, A.; Jimeno, E.; Carrillo, I.; Patias, P.; Georgiadis, C.; Kounoudes, A.; Michail, K.

    2016-06-01

    There is a continuous and increasing demand for solutions, both software and hardware-based, that are able to productively handle underground utilities geospatial data. Innovative approaches that are based on the use of the European GNSS, Galileo and EGNOS, sensor technologies and LBS, are able to monitor, document and manage utility infrastructures' data with an intuitive 3D augmented visualisation and navigation/positioning technology. A software and hardware-based system called LARA, currently under development through a H2020 co-funded project, aims at meeting that demand. The concept of LARA is to integrate the different innovative components of existing technologies in order to design and develop an integrated navigation/positioning and information system which coordinates GNSS, AR, 3D GIS and geodatabases on a mobile platform for monitoring, documenting and managing utility infrastructures on-site. The LARA system will guide utility field workers to locate the working area by helping them see beneath the ground, rendering the complexity of the 3D models of the underground grid such as water, gas and electricity. The capacity and benefits of LARA are scheduled to be tested in two case studies located in Greece and the United Kingdom with various underground utilities. The paper aspires to present the first results from this initiative. The project leading to this application has received funding from the European GNSS Agency under the European Union's Horizon 2020 research and innovation programme under grant agreement No 641460.

  15. Using 3D Visualization to Communicate Scientific Results to Non-scientists

    NASA Astrophysics Data System (ADS)

    Whipple, S.; Mellors, R. J.; Sale, J.; Kilb, D.

    2002-12-01

    If "a picture is worth a thousand words" then an animation is worth millions. 3D animations and visualizations are useful for geoscientists but are perhaps even more valuable for rapidly illustrating standard geoscience ideas and concepts (such as faults, seismicity patterns, and topography) to non-specialists. This is useful not only for purely educational needs but also in rapidly briefing decision makers where time may be critical. As a demonstration of this we juxtapose large geophysical datasets (e.g., Southern California seismicity and topography) with other large societal datasets (such as highways and urban areas), which allows an instant understanding of the correlations. We intend to work out a methodology to aid other datasets such as hospitals and bridges, for example, in an ongoing fashion. The 3D scenes we create from the separate datasets can be "flown" through and individual snapshots that emphasize the concepts of interest are quickly rendered and converted to formats accessible to all. Viewing the snapshots and scenes greatly aids non-specialists comprehension of the problems and tasks at hand. For example, seismicity clusters (such as aftershocks) and faults near urban areas are clearly visible. A simple "fly-by" through our Southern California scene demonstrates simple concepts such as the topographic features due to plate motion along faults, and the demarcation of the North American/Pacific Plate boundary by the complex fault system (e.g., Elsinore, San Jacinto and San Andreas faults) in Southern California.

  16. Mixed Reality Meets Pharmaceutical Development.

    PubMed

    Forrest, William P; Mackey, Megan A; Shah, Vivek M; Hassell, Kerry M; Shah, Prashant; Wylie, Jennifer L; Gopinath, Janakiraman; Balderhaar, Henning; Li, Li; Wuelfing, W Peter; Helmy, Roy

    2017-12-01

    As science evolves, the need for more efficient and innovative knowledge transfer capabilities becomes evident. Advances in drug discovery and delivery sciences have directly impacted the pharmaceutical industry, though the added complexities have not shortened the development process. These added complexities also make it difficult for scientists to rapidly and effectively transfer knowledge to offset the lengthened drug development timelines. While webcams, camera phones, and iPads have been explored as potential new methods of real-time information sharing, their non-"hands-free" nature and the lack of a shared viewer/observer point of view render them unsuitable for the R&D laboratory or manufacturing setting. As an alternative solution, the Microsoft HoloLens mixed-reality headset was evaluated as a more efficient, hands-free method of knowledge transfer and information sharing. After completing a traditional method transfer between 3 R&D sites (Rahway, NJ; West Point, PA; and Schnachen, Switzerland), a retrospective analysis of efficiency gain was performed through comparison with a mock method transfer between the NJ and PA sites using the HoloLens. The results demonstrated a minimum 10-fold gain in efficiency, based on savings in time and cost and on the ability to have real-time data analysis and discussion. In addition, other use cases were evaluated involving vendor and contract research/manufacturing organizations. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  17. Validation of percutaneous puncture trajectory during renal access using 4D ultrasound reconstruction

    NASA Astrophysics Data System (ADS)

    Rodrigues, Pedro L.; Rodrigues, Nuno F.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    An accurate percutaneous puncture is essential for disintegration and removal of renal stones. Although this procedure has proven to be safe, organs surrounding the renal target might be accidentally perforated. This work describes a new intraoperative framework in which tracked surgical tools are superimposed within 4D ultrasound imaging for safety assessment of the percutaneous puncture trajectory (PPT). A PPT is first generated from the skin puncture site towards an anatomical target, using the information retrieved by electromagnetic motion tracking sensors coupled to surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct, with GPU processing, a 4D ultrasound volume around the PPT. Volume hole-filling was performed at different processing time intervals by a tri-linear interpolation method. At spaced time intervals, the volume of the anatomical structures was segmented to ascertain whether any vital structure lies in the path of the PPT and might compromise surgical success. To enhance the visualization of the reconstructed structures, different render transfer functions were used. Results: real-time US volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views; 8-15 frames/s were achieved when using the whole reconstructed volume, and 3 frames/s when segmentation and detection of structures intersecting the PPT were added. The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT to safely and accurately perform the puncture in percutaneous nephrolithotomy.
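
    The tri-linear hole-filling step rests on standard trilinear interpolation. As a minimal illustration (a CPU sketch, not the authors' GPU implementation; the toy volume is hypothetical), the following interpolates a volume at a fractional voxel coordinate, the core operation such a hole-filling pass evaluates at each empty voxel:

    ```python
    import numpy as np

    def trilinear_sample(vol, p):
        """Trilinearly interpolate volume vol at fractional coordinate p = (x, y, z)."""
        x0, y0, z0 = np.floor(p).astype(int)
        dx, dy, dz = p - np.floor(p)
        acc = 0.0
        for i in (0, 1):                      # blend the 8 surrounding voxels
            for j in (0, 1):
                for k in (0, 1):
                    w = ((dx if i else 1.0 - dx) *
                         (dy if j else 1.0 - dy) *
                         (dz if k else 1.0 - dz))
                    acc += w * vol[x0 + i, y0 + j, z0 + k]
        return acc

    # Toy check: the value at a cell centre is the mean of its 8 corners.
    vol = np.arange(27, dtype=float).reshape(3, 3, 3)
    print(trilinear_sample(vol, np.array([0.5, 0.5, 0.5])))  # -> 6.5
    ```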

  18. Multi- and hyperspectral scene modeling

    NASA Astrophysics Data System (ADS)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use the public domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
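
    The per-band scripting pattern described above can be reproduced with a short driver that rewrites the surface reflectance and invokes POV-Ray once per band. This is a minimal sketch under assumed band reflectances and a toy scene, not the authors' canopy script:

    ```python
    # Drive POV-Ray once per spectral band, rewriting the reflectance each
    # time. Band names, reflectance values and the scene are illustrative.
    import subprocess

    BANDS = {"red": 0.08, "nir": 0.45}   # hypothetical leaf reflectances

    SCENE = """camera {{ location <0, 3, -6> look_at <0, 1, 0> }}
    light_source {{ <10, 20, -10> color rgb 1 }}
    sphere {{ <0, 1, 0>, 1
      pigment {{ color rgb {refl} }}   // per-band reflectance as a grey value
      finish {{ diffuse 1 ambient 0 }}
    }}
    """

    for band, refl in BANDS.items():
        pov = f"canopy_{band}.pov"
        with open(pov, "w") as f:
            f.write(SCENE.format(refl=refl))
        # Radiosity flags would be added here to capture multiple reflections.
        subprocess.run(["povray", f"+I{pov}", f"+Ocanopy_{band}.png",
                        "+W512", "+H512"], check=True)
    ```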

  19. Coniferous Canopy BRF Simulation Based on 3-D Realistic Scene

    NASA Technical Reports Server (NTRS)

    Wang, Xin-yun; Guo, Zhi-feng; Qin, Wen-han; Sun, Guo-qing

    2011-01-01

    It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems were applied to render 3-D coniferous forest scenes, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results of this study show that in most cases the two agreed well, at both the tree and the forest level.

  20. Application of 3D Laser Scanning Technology in Complex Rock Foundation Design

    NASA Astrophysics Data System (ADS)

    Junjie, Ma; Dan, Lu; Zhilong, Liu

    2017-12-01

    Taking the complex landform of the Tanxi Mountain Landscape Bridge as an example, the application of 3D laser scanning technology in the mapping of complex rock foundations is studied in this paper. A set of 3D laser scanning techniques is formed and several key engineering problems are solved. The first is 3D laser scanning of complex landforms: 3D laser scanning is used to obtain a complete 3D point cloud data model of the complex landform, and the detailed and accurate surveying and mapping results reduce the measuring time and the need for supplementary measurements. The second is 3D collaborative modeling of the complex landform: a 3D model of the complex landform is established based on the 3D point cloud data model, the super-structural foundation model is introduced for 3D collaborative design, the optimal design plan is selected, and construction progress is accelerated. The last is finite-element analysis of the complex landform foundation: the 3D model of the complex landform is imported into ANSYS to build a finite element model for calculating the anti-slide stability of the rock, providing a basis for the foundation design and construction.

  1. Stronger Dopamine D1 Receptor-Mediated Neurotransmission in Dyskinesia.

    PubMed

    Farré, Daniel; Muñoz, Ana; Moreno, Estefanía; Reyes-Resina, Irene; Canet-Pons, Júlia; Dopeso-Reyes, Iria G; Rico, Alberto J; Lluís, Carme; Mallol, Josefa; Navarro, Gemma; Canela, Enric I; Cortés, Antonio; Labandeira-García, José L; Casadó, Vicent; Lanciego, José L; Franco, Rafael

    2015-12-01

    Radioligand binding assays on rat striatal dopamine D1 receptors showed that brain lateralization of the dopaminergic system was due not to changes in expression but to changes in agonist affinity. D1 receptor-mediated striatal imbalance resulted from a significantly higher agonist affinity in the left striatum. D1 receptors heteromerize with dopamine D3 receptors, which are considered therapeutic targets for dyskinesia in parkinsonian patients. Expression of both D3 receptors and D1-D3 receptor heteromers was increased in samples from 6-hydroxy-dopamine-hemilesioned rats rendered dyskinetic by treatment with 3,4-dihydroxyphenyl-L-alanine (L-DOPA). Similar findings were obtained using striatal samples from primates. Radioligand binding studies in the presence of a D3 agonist led, in dyskinetic but not in lesioned or L-DOPA-treated rats, to a higher dopamine sensitivity. Upon D3-receptor activation, the affinity of agonists for binding to the right striatal D1 receptor increased. Excess dopamine coming from L-DOPA medication likely activates D3 receptors, thus making right and left striatal D1 receptors equally responsive to dopamine. These results show that dyskinesia occurs concurrently with a right/left striatal balance in D1 receptor-mediated neurotransmission.

  2. GPU-based multi-volume ray casting within VTK for medical applications.

    PubMed

    Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-03-01

    Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective of the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data, but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal and short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the visualization toolkit (VTK) pipeline, with the ability to apply different operations (e.g., transformations, clipping, and cropping) to each volume separately. The implemented method is cross-platform (Windows, Linux and Mac OSX) and runs on different graphics cards (NVidia and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³ voxels. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (>15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets. The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates, and the MVRC was effective when used in a neurosurgical navigation application.
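
    The front-to-back compositing rule at the heart of such a ray caster is compact. The numpy sketch below is a CPU illustration of the blending equations only (the transfer function and ray profiles are hypothetical, and in the MVRC the equivalent work happens per pixel on the GPU): it merges co-located samples from several volumes along one ray, with early ray termination:

    ```python
    import numpy as np

    def composite_ray(samples_per_volume, tf):
        """Front-to-back alpha-blend co-located samples from several volumes."""
        color, alpha = np.zeros(3), 0.0
        for i in range(len(samples_per_volume[0])):   # march front to back
            for vol in samples_per_volume:            # merge samples per step
                rgb, a = tf(vol[i])
                color += (1.0 - alpha) * a * np.asarray(rgb)
                alpha += (1.0 - alpha) * a
            if alpha > 0.99:                          # early ray termination
                break
        return color, alpha

    # Hypothetical transfer function: scalar -> grey colour, scaled opacity.
    tf = lambda s: ((s, s, s), min(s * 0.25, 1.0))
    ray_a = np.linspace(0.0, 1.0, 64)   # one sample profile per volume
    ray_b = np.linspace(1.0, 0.0, 64)
    print(composite_ray([ray_a, ray_b], tf))
    ```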

  3. Patient-specific three-dimensional printing for pre-surgical planning in hepatocellular carcinoma treatment.

    PubMed

    Perica, Elizabeth; Sun, Zhonghua

    2017-12-01

    Recently, three-dimensional (3D) printing has attracted great interest in medicine, and 3D printed models may be rendered as part of the pre-surgical planning process in order to better understand the complexities of an individual's anatomy. The aim of this study is to investigate the feasibility of utilising 3D printed liver models as clinical tools in pre-operative planning for resectable hepatocellular carcinoma (HCC) lesions. High-resolution contrast-enhanced computed tomography (CT) images were acquired and utilized to generate a patient-specific 3D printed liver model. Hepatic structures were segmented and edited to produce a printable model delineating intrahepatic anatomy and a resectable HCC lesion. Quantitative assessment of 3D model accuracy compared measurements of critical anatomical landmarks acquired from the original CT images, standard tessellation language (STL) files, and the 3D printed liver model. Comparative analysis of surveys completed by two radiologists investigated the clinical value of 3D printed liver models in radiology. The application of utilizing 3D printed liver models as tools in surgical planning for resectable HCC lesions was evaluated through kappa analysis of questionnaires completed by two abdominal surgeons. A scaled-down multi-material 3D liver model delineating patient-specific hepatic anatomy and pathology was produced, requiring a total production time of 25.25 hours and costing a total of AUD $1,250. A discrepancy was found in the total mean of measurements at each stage of production, with a total mean of 18.28±9.31 mm for measurements acquired from the original CT data, 15.63±8.06 mm for the STL files, and 14.47±7.71 mm for the 3D printed liver model. The 3D liver model did not enhance the radiologists' perception of patient-specific anatomy or pathology. Kappa analysis of the surgeons' responses to survey questions yielded a percentage agreement of 80%, and a κ value of 0.38 (P=0.24), indicating fair agreement. Study outcomes indicate that there is minimal value in utilizing the 3D printed models in diagnostic radiology. The potential usefulness of utilizing patient-specific 3D printed liver models as tools in surgical planning and intraoperative guidance for HCC treatment is verified. However, the feasibility of this application is currently challenged by identified limitations in 3D model production, including the cost and time required for model production, and inaccuracies potentially introduced at each stage of model fabrication.

  4. A Heterobimetallic W-Ni Complex Containing a Redox-Active W[SNS]2 Metalloligand.

    PubMed

    Rosenkoetter, Kyle E; Ziller, Joseph W; Heyduk, Alan F

    2016-07-05

    The tungsten complex W[SNS]2 ([SNS]H3 = bis(2-mercapto-4-methylphenyl)amine) was bound to a Ni(dppe) [dppe = 1,2-bis(diphenylphosphino)ethane] fragment to form the new heterobimetallic complex W[SNS]2Ni(dppe). Characterization of the complex by single-crystal X-ray diffraction revealed the presence of a short W-Ni bond, which renders the complex diamagnetic despite formal tungsten(V) and nickel(I) oxidation states. The W[SNS]2 unit acts as a redox-active metalloligand in the bimetallic complex, which displays four one-electron redox processes by cyclic voltammetry. In the presence of the organic acid 4-cyanoanilinium tetrafluoroborate, W[SNS]2Ni(dppe) catalyzes the electrochemical reduction of protons to hydrogen coincident with the first reduction of the complex.

  5. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC architecture.
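
    What a DRR computes can be illustrated with a parallel-beam CPU sketch: each radiograph pixel is the Beer-Lambert transmission of an attenuation line integral through the CT volume. The HU-to-attenuation scaling and the toy volume are assumptions; a clinical 2-D/3-D registration DRR instead traces diverging rays through an arbitrarily posed volume, which is where GPU parallelism pays off:

    ```python
    import numpy as np

    def drr_parallel(ct_hu, voxel_mm=1.0, mu_water=0.02):
        """Parallel-beam DRR: exponentiated attenuation line integral along z."""
        mu = np.clip(mu_water * (1.0 + ct_hu / 1000.0), 0.0, None)  # HU -> mu
        return np.exp(-mu.sum(axis=2) * voxel_mm)   # transmitted intensity

    ct = np.random.uniform(-1000, 1500, size=(64, 64, 64))  # toy CT in HU
    print(drr_parallel(ct).shape)                           # (64, 64) image
    ```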

  6. 3D Model Visualization Enhancements in Real-Time Game Engines

    NASA Astrophysics Data System (ADS)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate scientific representations. The main idea is to create simple geometries (with a low polygon count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms, and normals may then be calculated by rendering them to texture (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques, recently implemented in many entertainment applications, are known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real time, the flat faces of the object, adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D content. With the release of Unity 4.0, new rendering features have been added, including DirectX 11 support. Real-time tessellation is a technique that can be applied using such technology. Since the displacement and the resulting geometry are calculated on the GPU, the time-based execution cost of this technique is very low.
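
    The displacement step has a simple core: offset each tessellated vertex along its normal by the height sampled from the displacement map at its UV coordinate. The numpy sketch below illustrates that operation on the CPU (the mesh arrays and nearest-texel lookup are simplifications; in Unity with DirectX 11 the equivalent work happens in the tessellation and domain shader stages):

    ```python
    import numpy as np

    def displace(verts, normals, uvs, disp_map, scale=0.05):
        """Offset vertices along their normals by sampled displacement heights."""
        h, w = disp_map.shape
        # nearest-texel lookup; real shaders filter the texture
        px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
        py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
        d = disp_map[py, px][:, None]          # scalar height per vertex
        return verts + scale * d * normals

    verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
    uvs = np.array([[0.0, 0.0], [1.0, 1.0]])
    disp = np.random.rand(256, 256)            # toy displacement map
    print(displace(verts, normals, uvs, disp))
    ```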

  7. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively via the rendered scenes.
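
    For readers unfamiliar with the fusion machinery, a Kalman predict/update cycle has the following shape. This is a generic linear sketch with placeholder models F, H, Q, R and a toy constant-velocity example; it is not the paper's actual motion or measurement model:

    ```python
    import numpy as np

    def kf_step(x, P, z, F, H, Q, R):
        """One Kalman predict/update cycle (F, H are the linear(ized) models)."""
        x, P = F @ x, F @ P @ F.T + Q        # predict state and covariance
        y = z - H @ x                        # innovation from measurement z
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P

    # Toy example: state (position, velocity), position-only measurement.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    x, P = np.zeros(2), np.eye(2)
    x, P = kf_step(x, P, np.array([1.2]), F, H, 0.01 * np.eye(2), np.array([[0.5]]))
    print(x)
    ```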

  8. Creating photo-realistic works in a 3D scene using layers styles to create an animation

    NASA Astrophysics Data System (ADS)

    Avramescu, A. M.

    2015-11-01

    Creating realistic objects in a 3D scene is not easy work: the creation has to be very detailed. Even without prior experience in photo-realistic work, the right techniques and a good reference photo make it possible to create an amazing amount of detail and realism. This article presents some of these detailed methods, from which the techniques necessary to make beautiful and realistic objects in a scene can be learned. More precisely, in this paper we present how to create a 3D animated scene, mainly using the Pen Tool and Blending Options. Indeed, this work is based on teaching some simple ways of using Layer Styles to create convincing shadows, lights, textures and a realistic sense of three dimensions. The present work also shows how some interesting uses of the illumination and rendering options can create a realistic effect in a scene. Moreover, this article shows how to create photo-realistic 3D models from a digital image. The present work proposes to show how to use Illustrator paths, texturing, basic lighting and rendering, how to apply textures, and how to parent the building and object components. We also propose to use this approach to recreate smaller details or 3D objects from a 2D image. After a critical review stage, we present in this paper the architecture of a design method for creating an animation. The aim is to create a conceptual and methodological tutorial that addresses this issue both scientifically and in practice. This objective also includes proposing, on a strong scientific basis, a model that gives a better understanding of the techniques necessary to create a realistic animation.

  9. Image-Based Macro-Micro Finite Element Models of a Canine Femur with Implant Design Implications

    NASA Astrophysics Data System (ADS)

    Ghosh, Somnath; Krishnan, Ganapathi; Dyce, Jonathan

    2006-06-01

    In this paper, a comprehensive model of a bone-cement-implant assembly is developed for a canine cemented femoral prosthesis system. Various steps in this development entail profiling the canine femur contours by computed tomography (CT) scanning, computer aided design (CAD) reconstruction of the canine femur from CT images, CAD modeling of the implant from implant blueprints and CAD modeling of the interface cement. Finite element analysis of the macroscopic assembly is conducted for stress analysis in individual components of the system, accounting for variation in density and material properties in the porous bone material. A sensitivity analysis is conducted with the macroscopic model to investigate the effect of implant design variables on the stress distribution in the assembly. Subsequently, rigorous microstructural analysis of the bone incorporating the morphological intricacies is conducted. Various steps in this development include acquisition of the bone microstructural data from histological serial sectioning, stacking of sections to obtain 3D renderings of void distributions, microstructural characterization and determination of properties and, finally, microstructural stress analysis using a 3D Voronoi cell finite element method. Generation of the simulated microstructure and analysis by the 3D Voronoi cell finite element model provides a new way of modeling complex microstructures and correlating them to morphological characteristics. An inverse calculation of the material parameters of bone, combining macroscopic experiments with microstructural characterization and analysis, provides a new approach to evaluating properties without having to perform experiments at that scale. Finally, the microstructural stresses in the femur are computed using the 3D VCFEM to study the stress distribution at the scale of the bone porosity. A significant difference is observed between the macroscopic stresses and the peak microscopic stresses at different locations.

  10. Glyph-based analysis of multimodal directional distributions in vector field ensembles

    NASA Astrophysics Data System (ADS)

    Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger

    2015-04-01

    Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
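
    Mixture modelling of directional data of the kind described can be sketched as a two-component von Mises mixture fitted by EM. The component count, initialisation, and the concentration update (a common closed-form approximation) are simplifying assumptions, not the authors' procedure:

    ```python
    import numpy as np
    from scipy.stats import vonmises

    def fit_vm_mixture(theta, iters=50):
        """EM fit of a 2-component von Mises mixture to angles theta (radians)."""
        w, mu, kappa = np.array([0.5, 0.5]), np.array([0.0, np.pi]), np.array([1.0, 1.0])
        for _ in range(iters):
            # E-step: responsibility of each component for each angle
            pdf = np.stack([w[k] * vonmises.pdf(theta, kappa[k], loc=mu[k])
                            for k in range(2)])
            r = pdf / pdf.sum(axis=0)
            # M-step: weighted circular mean and concentration per component
            for k in range(2):
                C = (r[k] * np.cos(theta)).sum()
                S = (r[k] * np.sin(theta)).sum()
                n = r[k].sum()
                mu[k] = np.arctan2(S, C)
                R = np.hypot(C, S) / n                      # resultant length
                kappa[k] = R * (2.0 - R**2) / (1.0 - R**2)  # approximation
                w[k] = n / len(theta)
        return w, mu, kappa

    rng = np.random.default_rng(0)
    theta = np.concatenate([rng.vonmises(0.5, 4.0, 300),
                            rng.vonmises(2.5, 8.0, 200)])
    print(fit_vm_mixture(theta))   # weights, mean directions, concentrations
    ```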

  11. Interactive 3D Visualization: An Important Element in Dealing with Increasing Data Volumes and Decreasing Resources

    NASA Astrophysics Data System (ADS)

    Gee, L.; Reed, B.; Mayer, L.

    2002-12-01

    Recent years have seen remarkable advances in sonar technology, positioning capabilities, and computer processing power that have revolutionized the way we image the seafloor. The US Naval Oceanographic Office (NAVOCEANO) has updated its survey vessels and launches to the latest generation of technology and now possesses a tremendous ocean observing and mapping capability. However, the systems produce massive amounts of data that must be validated prior to inclusion in various bathymetry, hydrography, and imagery products. The key to meeting the challenge of the massive data volumes was to change the approach that required every data point be viewed. This was achieved with the replacement of the traditional line-by-line editing approach with an automated cleaning module, and an area-based editor. The approach includes a unique data structure that enables the direct access to the full resolution data from the area based view, including a direct interface to target files and imagery snippets from mosaic and full resolution imagery. The increased data volumes to be processed also offered tremendous opportunities in terms of visualization and analysis, and interactive 3D presentation of the complex multi-attribute data provided a natural complement to the area based processing. If properly geo-referenced and treated, the complex data sets can be presented in a natural and intuitive manner that allows the integration of multiple components each at their inherent level of resolution and without compromising the quantitative nature of the data. Artificial sun-illumination, shading, and 3-D rendering are used with digital bathymetric data to form natural looking and easily interpretable, yet quantitative, landscapes that allow the user to rapidly identify the data requiring further processing or analysis. Color can be used to represent depth or other parameters (like backscatter, quality factors or sediment properties), which can be draped over the DTM, or high resolution imagery can be texture mapped on bathymetric data. The presentation will demonstrate the new approach of the integrated area based processing and 3D visualization with a number of data sets from recent surveys.

  12. CROSS DRIVE: A New Interactive and Immersive Approach for Exploring 3D Time-Dependent Mars Atmospheric Data in Distributed Teams

    NASA Astrophysics Data System (ADS)

    Gerndt, Andreas M.; Engelke, Wito; Giuranna, Marco; Vandaele, Ann C.; Neary, Lori; Aoki, Shohei; Kasaba, Yasumasa; Garcia, Arturo; Fernando, Terrence; Roberts, David; CROSS DRIVE Team

    2016-10-01

    Atmospheric phenomena on Mars can be highly dynamic and have daily and seasonal variations. Planetary-scale wavelike disturbances, for example, are frequently observed in Mars' polar winter atmosphere. Possible sources of the wave activity have been suggested to be dynamical instabilities and quasi-stationary planetary waves, i.e. waves that arise predominantly via zonally asymmetric surface properties. For a comprehensive understanding of these phenomena, single altitude layers have to be analyzed carefully, and relations between different atmospheric quantities and interaction with the surface of Mars have to be considered. The CROSS DRIVE project addresses the presentation of these data in a global view by means of virtual reality techniques. Complex orbiter data from spectrometers and observation data from Earth are combined with global circulation models and with the high-resolution terrain data and images available from Mars Express or MRO instruments. Scientists can interactively extract features from these datasets and can change visualization parameters in real time in order to emphasize findings. Stereoscopic views allow for perception of the actual 3D behavior of Mars' atmosphere. A very important feature of the visualization system is the possibility to connect distributed workspaces, which enables discussions between distributed working groups. The workspace can scale from virtual reality systems to expert desktop applications to web-based project portals. If multiple virtual environments are connected, the 3D position of each individual user is captured and used to depict the scientist as an avatar in the virtual world. The appearance of the avatar can also scale, from simple annotations to complex avatars using tele-presence technology to reconstruct the users in 3D. Any change of the feature set (annotations, cutplanes, volume rendering, etc.) within the VR is immediately exchanged between all connected users, so that everybody is always aware of what is visible and being discussed. The discussion is supported by audio, and interaction is controlled by a moderator managing turn-taking presentations. A use-case execution proved successful and showed the potential of this immersive approach.

  13. The Flatworld Simulation Control Architecture (FSCA): A Framework for Scalable Immersive Visualization Systems

    DTIC Science & Technology

    2004-12-01

    handling using the X10 home automation protocol. Each 3D graphics client renders its scene according to an assigned virtual camera position. By having...control protocol. DMX is a versatile and robust framework which overcomes limitations of the X10 home automation protocol which we are currently using

  14. The Potential for Scientific Collaboration in Virtual Ecosystems

    ERIC Educational Resources Information Center

    Magerko, Brian

    2010-01-01

    This article explores the potential benefits of creating "virtual ecosystems" from real-world data. These ecosystems are intended to be realistic virtual representations of environments that may be costly or difficult to access in person. They can be constructed as 3D worlds rendered from stereo video data, augmented with scientific data, and then…

  15. Pathfinder. Volume 8, Number 3, May/June 2010. Technology - Rendering an Ever-Clearer Picture

    DTIC Science & Technology

    2010-06-01


  16. FaceTOON: a unified platform for feature-based cartoon expression generation

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine

    2008-02-01

    This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires advanced 3D graphics skills and competences from its users, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial features, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed for generating expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation and MPEG-4 compliant animation and rendering. The proposed FaceTOON system is currently considered for industrial evaluation and commercialization by the Quadraxis company.

  17. CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds

    NASA Astrophysics Data System (ADS)

    Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol

    The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations and particle effects and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: a fuzzy steering model focusing on performance, and a geometric steering model for the best realism. Mixing these approaches makes it possible to simulate thousands of autonomous characters in real time, resulting in a scalable but still controllable crowd.

  18. 3D Flow visualization in virtual reality

    NASA Astrophysics Data System (ADS)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.

  19. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.

    PubMed

    Laha, Bireswar; Bowman, Doug A; Socha, John J

    2014-04-01

    Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.

  20. Real-time Graphics Processing Unit Based Fourier Domain Optical Coherence Tomography and Surgical Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Kang

    2011-12-01

    In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically at microsurgical intervention applications, was developed and studied. As part of this work several ultra-high speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform was developed that uses the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering. Several GPU-based algorithms, such as the non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and a multi-GPU implementation, were developed to improve the impulse response, SNR roll-off and stability of the system. Full-range, complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve a doubled imaging range and improved SNR. These technologies overcome the image reconstruction and visualization bottlenecks that widely exist in current ultra-high speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor based microsurgical tool was developed and validated. Through real-time signal processing, edge detection and feedback control, the tool was shown to be capable of tracking a target surface and compensating for motion. A micro-incision test on a phantom was performed using a CP-OCT-sensor integrated hand-held tool, which showed an incision error of less than +/-5 microns, compared to a >100 micron error for free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT guided micro-manipulation using a phantom. Multiple volume renderings of one 3D data set were performed with different view angles to allow accurate monitoring of the micro-manipulation and to let the user clearly monitor the tool-to-target spatial relation in real time. The system was also validated by imaging multiple biological samples, such as a human fingerprint, a human cadaver head and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide surgeons with a real-time comprehensive spatial view of the microsurgical region and accurate depth perception.
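
    The core FD-OCT reconstruction is compact: a spectral interferogram, uniform in wavenumber k, is Fourier transformed, and its magnitude localizes reflectors in depth. The numpy sketch below builds a synthetic two-reflector interferogram (all values are toy numbers; real spectrometers sample non-uniformly in k, which is what motivates the NUFFT mentioned above):

    ```python
    import numpy as np

    n = 2048
    k = np.linspace(5.0, 6.0, n)               # uniform wavenumber grid (1/um)
    depths, refl = [120.0, 300.0], [1.0, 0.4]  # toy reflector depths (um)
    fringes = sum(r * np.cos(2 * k * z) for r, z in zip(refl, depths))
    window = np.hanning(n)                     # suppress FFT side lobes
    ascan = np.abs(np.fft.fft(fringes * window))[: n // 2]
    print(ascan.argmax())                      # bin of the strongest reflector
    ```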

  1. NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.

    PubMed

    Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul

    2014-09-30

    As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Imaging system for creating 3D block-face cryo-images of whole mice

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For BPK mice model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier for them to interpret image data. The combination of field of view, depth of field, ultra high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes, characterization of diseases like blood vessel disease, kidney disease, and cancer, assessment of drug and gene therapy delivery and efficacy and validation of other imaging modalities.

  3. Glnemo2: Interactive Visualization 3D Program

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Charles

    2011-10-01

    Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia Qt 4.X API. It displays in 3D the particle positions of the different components of an nbody snapshot. It quickly gives a lot of information about the data (shape, density area, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, real-time gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects and frame buffer objects, and which takes into account the power of the graphics card in use to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the minGW compiler), and MacOSX, thanks to the Qt4 API.

  4. Making transboundary risks governable: reducing complexity, constructing spatial identity, and ascribing capabilities.

    PubMed

    Lidskog, Rolf; Uggla, Ylva; Soneryd, Linda

    2011-03-01

    Environmental problems that cross national borders are attracting increasing public and political attention; regulating them involves coordinating the goals and activities of various governments, which often presupposes simplifying and standardizing complex knowledge, and finding ways to manage uncertainty. This article explores how transboundary environmental problems are dealt with to render complex issues governable. By discussing oil pollution in the Baltic Sea and the gas pipeline between Russia and Germany, we elucidate how boundaries are negotiated to make issues governable. Three processes are found to be particularly relevant to how involved actors render complex issues governable: complexity reduction, construction of a spatial identity for an issue, and ascription of capabilities to new or old actor constellations. We conclude that such regulation is always provisional, implying that existing regulation is always open for negotiation and criticism.

  5. D-deprenyl protects nigrostriatal neurons against 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine-induced dopaminergic neurotoxicity.

    PubMed

    Muralikrishnan, Dhanasekharan; Samantaray, Supriti; Mohanakumar, Kochupurackal P

    2003-10-01

    Selegiline (L-deprenyl) is believed to render protection against 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) neurotoxicity to a significant extent via a free radical scavenging mechanism, which is independent of its ability to inhibit monoamine oxidase-B (MAO-B) in the brain. We investigated the hydroxyl radical (•OH) scavenging action and neuroprotective effect of D-deprenyl, its less active isomer, in MPTP-induced dopaminergic neurotoxicity in mice to test whether the chemical structure of the molecule or its biological effects contribute to this property. To achieve this goal we studied the effects of D-deprenyl on: (1) •OH production in a Fenton reaction; (2) MPTP-induced •OH generation and dopamine (DA) depletion in vivo, employing a sensitive HPLC-electrochemical procedure; and (3) formation of MPP(+) in vivo in the striatum following systemic administration of MPTP, employing an HPLC-photodiode array detection system. D-deprenyl inhibited ferrous citrate-induced •OH in vitro (0.45 microM) and MPTP-induced •OH in vivo in the substantia nigra (SN) and in the striatum (1.0 mg/kg, i.p.). D-deprenyl did not, but L-deprenyl (at a 0.5 mg/kg dose) did, significantly inhibit formation of MPP(+) in the striatum 90 min following systemic MPTP injection. It failed to affect MAO-B activity at 0.5 mg/kg in the striatum, but effectively blocked MPTP-induced striatal DA depletion. The potency of D-deprenyl to scavenge MPTP-induced •OH in vivo and to render protection against the dopaminergic neurotoxicity without affecting dopamine turnover, MAO-B activity, or formation of MPP(+) in the brain indicates a direct involvement of •OH in the neurotoxic action of MPTP and an antioxidant effect in the neuroprotective action of deprenyl. Copyright 2003 Wiley-Liss, Inc.

  6. A detailed look at the cytoskeletal architecture of the Giardia lamblia ventral disc.

    PubMed

    Brown, Joanna R; Schwartz, Cindi L; Heumann, John M; Dawson, Scott C; Hoenger, Andreas

    2016-04-01

    Giardia lamblia is a protistan parasite that infects and colonizes the small intestine of mammals. It is widespread and particularly endemic in the developing world. Here we present a detailed structural study, by 3-D negative staining and cryo-electron tomography, of a unique Giardia organelle, the ventral disc. The disc is composed of a regular array of microtubules and associated sheets, called microribbons, that form a large spiral held together by a myriad of mostly unknown associated proteins. In a previous study we analyzed by cryo-electron tomography the central microtubule portion (here called the disc body) of the ventral disc and found a large complement of microtubule inner proteins (MIPs) and outer microtubule-associated proteins (MAPs) that render these microtubules hyper-stable. With this follow-up study we expanded our 3-D analysis to different parts of the disc, such as the ventral and dorsal areas of the overlap zone, as well as the outer disc margin. There are intrinsic location-specific characteristics in the composition of microtubule-associated proteins between these regions, as well as large differences in the overall architecture of microtubules and microribbons. The lateral packing of microtubule-microribbon complexes varies substantially, and closer packing often comes with contracted lateral tethers that seem to hold the disc together. It appears that the marginal microtubule-microribbon complexes function as outer, laterally contractible lids that may help the cell clamp onto the intestinal microvilli. Furthermore, we analyzed the length, quantity, curvature and distribution of microtubules in different zones of the disc, and found values that differ from previous publications. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. 3D scanning characteristics of an amorphous silicon position sensitive detector array system.

    PubMed

    Contreras, Javier; Gomes, Luis; Filonovich, Sergej; Correia, Nuno; Fortunato, Elvira; Martins, Rodrigo; Ferreira, Isabel

    2012-02-13

    The 3D scanning electro-optical characteristics of a data acquisition prototype system integrating a linear array of 32 one-dimensional (1D) amorphous silicon position sensitive detectors (PSDs) were analyzed. The system was mounted on a platform for imaging 3D objects using the triangulation principle with a sheet-of-light laser. Newly obtained results reveal a minimum detectable gap, or simulated defect, of approximately 350 μm. Furthermore, a first study of the angle for 3D scanning was also performed, allowing a broad range of angles to be used in the process. The relationship between the scanning angle of the incident light onto the object and the image displacement distance on the sensor was determined for the first time in this system setup. Rendering of 3D object profiles was performed at a significantly higher number of frames than in the past and was possible for an incident light angle range of 15° to 85°.
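
    The triangulation principle reduces to a small geometric relation. In the simplest perpendicular-camera configuration (an illustrative model with made-up calibration constants, not the prototype's actual geometry), a surface raised by height h shifts the laser stripe by h·tan(α) in the world, which the camera sees as a sensor displacement p = f·h·tan(α)/Z:

    ```python
    import numpy as np

    def height_from_shift(p_mm, f_mm=16.0, Z_mm=300.0, alpha_deg=45.0):
        """Invert the sheet-of-light relation p = f*h*tan(alpha)/Z for height h."""
        return p_mm * Z_mm / (f_mm * np.tan(np.radians(alpha_deg)))

    # A 19 um stripe displacement maps to ~0.36 mm of relief, the order of
    # the ~350 um minimum detectable defect reported above.
    print(height_from_shift(0.019))
    ```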

  8. A novel three-dimensional scaffold for regenerative endodontics: materials and biological characterizations.

    PubMed

    Bottino, Marco C; Yassen, Ghaeth H; Platt, Jeffrey A; Labban, Nawaf; Windsor, L Jack; Spolnik, Kenneth J; Bressiani, Ana H A

    2015-11-01

    An electrospun nanocomposite fibrous material holds promise as a scaffold, as well as a drug-delivery device to aid in root maturogenesis and the regeneration of the pulp-dentine complex. A novel three-dimensional (3D) nanocomposite scaffold composed of polydioxanone (PDS II®) and halloysite nanotubes (HNTs) was designed and fabricated by electrospinning. Morphology, structure, mechanical properties and cell compatibility studies were carried out to evaluate the effects of HNTs incorporation (0.5-10 wt% relative to PDS w/w). Overall, a 3D porous network was seen in the different fabricated electrospun scaffolds, regardless of the HNT content. The incorporation of HNTs at 10 wt% led to a significant (p < 0.0001) fibre diameter increase and a reduction in scaffold strength. Moreover, PDS-HNTs scaffolds supported the attachment and proliferation of human-derived pulp fibroblast cells. Quantitative proliferation assay performed with human dental pulp-derived cells as a function of nanotubes concentration indicated that the HNTs exhibit a high level of biocompatibility, rendering them good candidates for the potential encapsulation of distinct bioactive molecules. Collectively, the reported data support the conclusion that PDS-HNTs nanocomposite fibrous structures hold potential in the development of a bioactive scaffold for regenerative endodontics. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Scanning Thin-Sheet Laser Imaging Microscopy Elucidates Details on Mouse Ear Development

    PubMed Central

    Kopecky, Benjamin; Johnson, Shane; Schmitz, Heather; Santi, Peter; Fritzsch, Bernd

    2016-01-01

    Background: The mammalian inner ear is transformed from a flat placode into a three-dimensional (3D) structure with six sensory epithelia that allow for the perception of sound and both linear and angular acceleration. While hearing and balance problems are typically considered to be adult-onset diseases, they may arise as a developmental perturbation to the developing ear. Future prevention of hearing or balance loss requires an understanding of how closely genetic mutations in model organisms reflect the human case, necessitating an objective multidimensional comparison of mouse ears with human ears that have comparable mutations in the same gene. Results: Here, we present improved 3D analyses of normal murine ears during embryonic development using optical sections obtained through Thin-Sheet Laser Imaging Microscopy. We chronicle the transformation of an undifferentiated otic vesicle from mouse embryonic day 11.5 to a fully differentiated inner ear at postnatal day 15. Conclusions: Our analysis provides new insights into ear development, enables unique perspectives into the complex development of the ear, and allows for the first full quantification of volumetric and linear aspects of ear growth. Our data provide the framework for future analysis of mutant phenotypes that are currently under-appreciated using only two-dimensional renderings. PMID:22271591

  10. Scanning thin-sheet laser imaging microscopy elucidates details on mouse ear development.

    PubMed

    Kopecky, Benjamin; Johnson, Shane; Schmitz, Heather; Santi, Peter; Fritzsch, Bernd

    2012-03-01

    The mammalian inner ear is transformed from a flat placode into a three-dimensional (3D) structure with six sensory epithelia that allow for the perception of sound and both linear and angular acceleration. While hearing and balance problems are typically considered to be adult-onset diseases, they may arise as a developmental perturbation to the developing ear. Future prevention of hearing or balance loss requires an understanding of how closely genetic mutations in model organisms reflect the human case, necessitating an objective multidimensional comparison of mouse ears with human ears that have comparable mutations in the same gene. Here, we present improved 3D analyses of normal murine ears during embryonic development using optical sections obtained through Thin-Sheet Laser Imaging Microscopy. We chronicle the transformation of an undifferentiated otic vesicle from mouse embryonic day 11.5 to a fully differentiated inner ear at postnatal day 15. Our analysis provides new insights into ear development, enables unique perspectives into the complex development of the ear, and allows for the first full quantification of volumetric and linear aspects of ear growth. Our data provide the framework for future analysis of mutant phenotypes that are currently under-appreciated using only two-dimensional renderings. Copyright © 2012 Wiley Periodicals, Inc.

  11. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
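
    As a concrete illustration of exploiting the geometric relationship between viewpoints, the following hedged sketch (function names and the nearest-pixel splatting are illustrative assumptions, not the paper's scheme) forward-warps a previously rendered image through its depth map to predict the next requested view; only the residual between prediction and the newly rendered image would then need to be compressed.

    ```python
    # Hypothetical sketch of depth-based view compensation: predict the image at
    # a new viewpoint by reprojecting the previous rendered image through its
    # depth map, then encode only the residual. The paper's actual scheme differs.
    import numpy as np

    def reproject(prev_img, depth, K, R, t):
        """Forward-warp prev_img (H x W) into the new view given per-pixel depth,
        camera intrinsics K (3x3), and the relative pose (R 3x3, t shape (3,))."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])  # homogeneous pixels
        rays = np.linalg.inv(K) @ pix                           # unit-depth rays
        pts = rays * depth.ravel()                              # 3-D points, old camera frame
        pts_new = R @ pts + t[:, None]                          # points in new camera frame
        proj = K @ pts_new
        un = np.round(proj[0] / proj[2]).astype(int)
        vn = np.round(proj[1] / proj[2]).astype(int)
        pred = np.zeros_like(prev_img)
        ok = (un >= 0) & (un < W) & (vn >= 0) & (vn < H) & (pts_new[2] > 0)
        pred[vn[ok], un[ok]] = prev_img.ravel()[ok]             # nearest-pixel splat
        return pred

    # The residual (new_img - pred) is what would then be coded, e.g. with JPEG2000.
    ```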

  12. Real-time rendering for multiview autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.

    2006-02-01

    In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview autostereoscopic displays are in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format that is suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel, e.g., for each pixel in the video not only its color is given but also its distance to the camera. In this paper we provide a theoretical framework for the parallactic transformations which relate captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high quality images.
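
    A toy sketch of the 2D-plus-depth rendering idea follows (the paper's parallactic framework and its subpixel/lenticular handling are not reproduced; the disparity model and parameter names are assumptions): per-pixel screen disparity is derived from the depth channel, and the image is forward-mapped scanline by scanline, writing far pixels first so that nearer pixels win, which resolves occlusions.

    ```python
    # Toy sketch (not the paper's exact framework): derive per-pixel disparity
    # from a depth channel and forward-map a 2D image to one view of a multiview
    # display. Pixels are written back-to-front so nearer pixels overwrite
    # farther ones, giving painter's-algorithm occlusion handling.
    import numpy as np

    def render_view(img, depth, eye_offset, focal, z_conv):
        """eye_offset: camera shift for this view; focal: focal length in pixels;
        z_conv: convergence depth mapped to zero disparity (hypothetical model)."""
        H, W = depth.shape
        disparity = eye_offset * focal * (1.0 / z_conv - 1.0 / depth)
        out = np.zeros_like(img)
        x_src = np.argsort(-depth, axis=1)         # per-scanline, farthest first
        rows = np.arange(H)[:, None]
        x_dst = np.clip(np.round(x_src + disparity[rows, x_src]).astype(int), 0, W - 1)
        out[rows, x_dst] = img[rows, x_src]        # nearer pixels written last
        return out
    ```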

  13. Hierarchical image-based rendering using texture mapping hardware

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, N

    1999-01-15

    Multi-layered depth images containing color and normal information for subobjects in a hierarchical scene model are precomputed with standard z-buffer hardware for six orthogonal views. These are adaptively selected according to the proximity of the viewpoint, and combined using hardware texture mapping to create "reprojected" output images for new viewpoints. (If a subobject is too close to the viewpoint, the polygons in the original model are rendered.) Specific z-ranges are selected from the textures with the hardware alpha test to give accurate 3D reprojection. The OpenGL color matrix is used to transform the precomputed normals into their orientations in the final view, for hardware shading.
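
    The adaptive selection step can be pictured with a minimal sketch (illustrative only, not Max's implementation): choose the precomputed orthogonal view whose axis best aligns with the direction from the eye to the subobject, falling back to the original polygons when the viewpoint is too close.

    ```python
    # Minimal illustration (not the paper's implementation): pick which of the
    # six precomputed orthogonal views (+x, -x, +y, -y, +z, -z) best matches the
    # current viewing direction, falling back to geometry when too close.
    import numpy as np

    AXES = np.array([[1, 0, 0], [-1, 0, 0],
                     [0, 1, 0], [0, -1, 0],
                     [0, 0, 1], [0, 0, -1]], dtype=float)

    def select_view(eye, subobject_center, near_limit):
        to_obj = subobject_center - eye
        dist = np.linalg.norm(to_obj)
        if dist < near_limit:
            return "render_original_polygons"      # too close: use the real model
        view_dir = to_obj / dist
        best = int(np.argmax(AXES @ view_dir))     # axis most aligned with view
        return f"reproject_precomputed_view_{best}"
    ```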

  14. Applications of 2D to 3D conversion for educational purposes

    NASA Astrophysics Data System (ADS)

    Koido, Yoshihisa; Morikawa, Hiroyuki; Shiraishi, Saki; Takeuchi, Soya; Maruyama, Wataru; Nakagori, Toshio; Hirakata, Masataka; Shinkai, Hirohisa; Kawai, Takashi

    2013-03-01

    There are three main approaches to creating stereoscopic S3D content: stereo filming using two cameras, stereo rendering of 3D computer graphics, and 2D to S3D conversion by adding binocular information to 2D material images. Although manual "off-line" conversion can control the amount of parallax flexibly, 2D material images are converted according to monocular information in most cases, and the flexibility of 2D to S3D conversion has not been exploited. If depth is expressed flexibly, the comprehension of and interest evoked by converted S3D content are anticipated to differ from those evoked by 2D content. Therefore, in this study we created new S3D content for education by applying 2D to S3D conversion. For surgical education, we created S3D surgical operation content under the supervision of a surgeon, using a partial 2D to S3D conversion technique expected to concentrate viewers' attention on significant areas. For art education, we converted Ukiyoe prints, traditional Japanese artworks made from woodcuts. The conversion of this content, which has little depth information, into S3D is expected to produce different cognitive processes from those evoked by 2D content, e.g., the excitation of interest and the understanding of spatial information. In addition, the effects of the representation of these contents were investigated.

  15. Transform coding for hardware-accelerated volume rendering.

    PubMed

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
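
    For readers unfamiliar with block-based transform coding, here is a generic encode/decode round trip on one volume brick using a separable 3-D DCT with uniform scalar quantization; the paper's asymmetric codec (dequantization folded into the inverse transform, plus block classification) is more elaborate and is not reproduced here.

    ```python
    # Generic block transform-coding sketch for a volume brick, using SciPy's
    # separable 3-D DCT with uniform scalar quantization. This only shows the
    # basic round trip; it is not the paper's optimized decoder.
    import numpy as np
    from scipy.fft import dctn, idctn

    BLOCK = 8

    def encode_block(block, q_step):
        coeffs = dctn(block, norm="ortho")
        return np.round(coeffs / q_step).astype(np.int16)   # quantized coefficients

    def decode_block(qcoeffs, q_step):
        return idctn(qcoeffs.astype(float) * q_step, norm="ortho")

    rng = np.random.default_rng(0)
    vol = rng.random((BLOCK, BLOCK, BLOCK))
    rec = decode_block(encode_block(vol, q_step=0.05), q_step=0.05)
    print("max reconstruction error:", np.abs(vol - rec).max())
    ```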

  16. A system of three-dimensional complex variables

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1986-01-01

    Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.
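
    For reference, the classical 2-D conditions that the paper's 3-D analyticity conditions are said to generalize read as follows (the 3-D form itself is specific to the paper and is not reproduced here):

    ```latex
    % Classical 2-D Cauchy-Riemann conditions, the baseline the paper generalizes:
    \[
    f(x + \mathrm{i}y) = u(x,y) + \mathrm{i}\,v(x,y)
    \quad\text{is analytic iff}\quad
    \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
    \qquad
    \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.
    \]
    ```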

  17. Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device

    NASA Astrophysics Data System (ADS)

    Färber, Matthias; Heller, Julika; Handels, Heinz

    2007-03-01

    The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract liquor. The training of this procedure is usually done on the patient, guided by experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize the training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat, and of original CT data that contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation the Visible Human male data has been used to generate a virtual training body. Several users with different medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables the feeling of unsegmented structures. In particular, the restriction of transversal needle movement together with the rotation constraints enabled by the 6DOF device facilitates a realistic puncture simulation.
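
    A hedged sketch of the general principle behind force computation from volume data follows (the simulator's actual model, with labels, 6DOF torques and rotation constraints, is far richer; names and the gain are illustrative): sample the CT volume at the needle tip by trilinear interpolation and oppose the insertion direction with a force scaled by the local intensity.

    ```python
    # Illustrative sketch only: trilinear sampling of a CT volume at the needle
    # tip, with a resistance force proportional to the local intensity.
    import numpy as np

    def trilinear(vol, p):
        """Sample volume vol (Z x Y x X) at continuous point p = (z, y, x);
        p must lie at least one voxel inside the volume bounds."""
        i0 = np.floor(p).astype(int)
        f = p - i0
        acc = 0.0
        for dz in (0, 1):
            for dy in (0, 1):
                for dx in (0, 1):
                    w = (f[0] if dz else 1 - f[0]) * \
                        (f[1] if dy else 1 - f[1]) * \
                        (f[2] if dx else 1 - f[2])
                    acc += w * vol[i0[0] + dz, i0[1] + dy, i0[2] + dx]
        return acc

    def resistance_force(ct, tip, insertion_dir, gain=0.01):
        density = trilinear(ct, tip)               # local tissue "stiffness" proxy
        return -gain * density * insertion_dir     # force opposing insertion
    ```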

  18. Anxiogenic-like effects of fluoxetine render adult male rats vulnerable to the effects of a novel stress.

    PubMed

    Gomez, Francisca; García-García, Luis

    2017-02-01

    Fluoxetine (FLX) has paradoxical anxiogenic-like effects during the acute phase of treatment. In adolescent (35d-old) male rats, the stress-like effects induced by short-term (3d-4d) FLX treatment appear to involve up-regulation of paraventricular nucleus (PVN) arginine vasopressin (AVP) mRNA. However, studies on FLX-induced anxiety-like effects in adult rodents are inconclusive. Herein, we sought to study the response of adult male rats (60-65d-old) to a similar FLX treatment, also investigating how the stressful component inherent to our experimental conditions contributed to the responses. We show that FLX acutely increased plasma corticosterone concentrations, attenuated the stress-induced hyperthermia (SIH) and reduced (≈40%) basal POMC mRNA expression in the arcuate nucleus (ARC). However, FLX did not alter the basal expression of PVN corticotrophin-releasing hormone (CRH), anterior pituitary pro-opiomelanocortin (POMC) or raphe nucleus serotonin transporter (SERT). Nonetheless, some regressions point towards the plausibility that FLX activated the hypothalamic-pituitary-adrenal (HPA) axis. The behavioral study revealed that FLX acutely increased emotional reactivity in the holeboard, an effect followed by a body weight loss of ≈2.5% after 24h. Interestingly, i.p. injection with vehicle did not have behavioral effects; furthermore, after experiencing the stressful component of the holeboard, the rats kept eating and gaining weight as normal. By contrast, the stress-naïve rats reduced food intake and gained less weight, although maintaining a positive energy state. Therefore, on one hand, repetition of a mild stressor would unchain compensatory mechanisms to restore energy homeostasis after stress, increasing resiliency to novel stressors. On the other hand, FLX might render stressed adult rats vulnerable to novel stressors through the emergence of counter-regulatory changes involving HPA axis activation and diminished sympathetic output, possibly due to reduced melanocortin signaling. Therefore, complex interactions between hypothalamic CRH and POMC might determine the adaptive nature of the response of adult male rats to FLX and/or stress. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. View generated database

    NASA Technical Reports Server (NTRS)

    Downward, James G.

    1992-01-01

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

  20. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  1. UWGSP4: an imaging and graphics superworkstation and its medical applications

    NASA Astrophysics Data System (ADS)

    Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin

    1992-05-01

    UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. The real-time cine display, up to 32 frames of 1280 X 1024 pixels per second, is supported. In 3-D imaging, the update rate for the surface rendering is 10 frames of 20,000 polygons per second; the update rate for the volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024-pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.

  2. Improving Intercontinental Ballistic Missile Maintenance Scheduling Through the Use of Location Analysis Methodologies

    DTIC Science & Technology

    2006-03-01

    factors that “maximize the benefit of location to the firm” (Heizer & Render, 2004:302-307). In the book, Facility Location: Applications and Theory...Fylstra, D., Lasdon, L., Watson, J. and Waren, A. “Design and Use of the Microsoft Excel Solver,” Interfaces, 28(5):29-55, 1998. Heizer, Jay...and Render, Barry. Principles of Operations Management (5th ed.). New Jersey: Pearson Education Inc., 2004. Hofstra University. (n.d.). Von

  3. Real-time photorealistic stereoscopic rendering of fire

    NASA Astrophysics Data System (ADS)

    Rose, Benjamin M.; McAllister, David F.

    2007-02-01

    We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real time presents a challenge because of the transparency and non-static fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering, because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that, in order to attain real-time frame rates, our method based on billboarding is effective. Slicing is used to simulate depth. 2D images are texture-mapped onto polygons, and alpha blending is used to treat transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.
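
    The billboarding at the core of the method can be sketched as follows (a minimal illustration with assumed names and a fixed world up-vector): orient each textured quad so that it faces the camera before alpha-blending the slices.

    ```python
    # Minimal billboarding sketch: compute the four corners of a textured quad
    # oriented to face the camera. Fire slices rendered this way are then
    # alpha-blended back to front.
    import numpy as np

    def billboard_corners(center, camera_pos, size,
                          world_up=np.array([0.0, 1.0, 0.0])):
        to_cam = camera_pos - center
        to_cam /= np.linalg.norm(to_cam)
        right = np.cross(world_up, to_cam)
        right /= np.linalg.norm(right)
        up = np.cross(to_cam, right)               # orthonormal billboard frame
        h = 0.5 * size
        return [center + h * (-right - up), center + h * (right - up),
                center + h * (right + up), center + h * (-right + up)]
    ```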

  4. Effects of finite pulse width on two-dimensional Fourier transform electron spin resonance.

    PubMed

    Liang, Zhichun; Crepeau, Richard H; Freed, Jack H

    2005-12-01

    Two-dimensional (2D) Fourier transform ESR techniques, such as 2D-ELDOR, have considerably improved the resolution of ESR in studies of molecular dynamics in complex fluids such as liquid crystals and membrane vesicles and in spin labeled polymers and peptides. A well-developed theory based on the stochastic Liouville equation (SLE) has been successfully employed to analyze these experiments. However, one fundamental assumption has been utilized to simplify the complex analysis, viz. the pulses have been treated as ideal non-selective ones, which therefore provide uniform irradiation of the whole spectrum. In actual experiments, the pulses are of finite width causing deviations from the theoretical predictions, a problem that is exacerbated by experiments performed at higher frequencies. In the present paper we provide a method to deal with the full SLE including the explicit role of the molecular dynamics, the spin Hamiltonian and the radiation field during the pulse. The computations are rendered more manageable by utilizing the Trotter formula, which is adapted to handle this SLE in what we call a "Split Super-Operator" method. Examples are given for different motional regimes, which show how 2D-ELDOR spectra are affected by the finite pulse widths. The theory shows good agreement with 2D-ELDOR experiments performed as a function of pulse width.
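
    The Trotter idea underlying the "Split Super-Operator" method can be demonstrated on small matrices (a generic illustration, not the paper's actual superoperators): a propagator for a sum of non-commuting generators is approximated by alternating the individual exponentials over many short steps.

    ```python
    # Generic demonstration of symmetric (Strang) Trotter splitting:
    # exp((A+B)T) is approximated by n steps of exp(A dt/2) exp(B dt) exp(A dt/2).
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4)); A = A - A.T   # two non-commuting generators
    B = rng.standard_normal((4, 4)); B = B - B.T

    T, n = 1.0, 200
    dt = T / n
    exact = expm((A + B) * T)
    step = expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2)
    approx = np.linalg.matrix_power(step, n)
    print("Trotter error:", np.linalg.norm(approx - exact))   # shrinks as O(dt^2)
    ```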

  5. Portability and Cross-Platform Performance of an MPI-Based Parallel Polygon Renderer

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1999-01-01

    Visualizing the results of computations performed on large-scale parallel computers is a challenging problem, due to the size of the datasets involved. One approach is to perform the visualization and graphics operations in place, exploiting the available parallelism to obtain the necessary rendering performance. Over the past several years, we have been developing algorithms and software to support visualization applications on NASA's parallel supercomputers. Our results have been incorporated into a parallel polygon rendering system called PGL. PGL was initially developed on tightly-coupled distributed-memory message-passing systems, including Intel's iPSC/860 and Paragon, and IBM's SP2. Over the past year, we have ported it to a variety of additional platforms, including the HP Exemplar, SGI Origin2000, Cray T3E, and clusters of Sun workstations. In implementing PGL, we have had two primary goals: cross-platform portability and high performance. Portability is important because (1) our manpower resources are limited, making it difficult to develop and maintain multiple versions of the code, and (2) NASA's complement of parallel computing platforms is diverse and subject to frequent change. Performance is important in delivering adequate rendering rates for complex scenes and ensuring that parallel computing resources are used effectively. Unfortunately, these two goals are often at odds. In this paper we report on our experiences with portability and performance of the PGL polygon renderer across a range of parallel computing platforms.
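
    PGL's internals are not described in this abstract, but sort-last depth compositing, a common pattern in parallel polygon rendering, conveys the flavor. In this hypothetical mpi4py sketch, every rank renders a full-size color and depth buffer for its share of the scene, and the root picks the nearest fragment per pixel.

    ```python
    # Hypothetical sort-last compositing sketch (not PGL's implementation):
    # gather per-rank color/depth buffers and keep the nearest fragment.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    H, W = 64, 64

    # stand-ins for each rank's locally rendered buffers
    color = np.full((H, W), comm.rank, dtype=np.float32)
    depth = np.random.default_rng(comm.rank).random((H, W)).astype(np.float32)

    colors = comm.gather(color, root=0)
    depths = comm.gather(depth, root=0)
    if comm.rank == 0:
        depths = np.stack(depths)                  # (nranks, H, W)
        nearest = np.argmin(depths, axis=0)        # winning rank per pixel
        final = np.stack(colors)[nearest, np.arange(H)[:, None], np.arange(W)]
        print("composited image shape:", final.shape)
    ```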

  6. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976
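
    Gradient-domain ("seamless") cloning of the kind used for the texture-mapping step is available off the shelf in OpenCV; a minimal usage sketch follows, with placeholder file names.

    ```python
    # Seamless (Poisson) cloning of a texture patch into a base texture map.
    # File names are placeholders, not assets from the paper.
    import cv2
    import numpy as np

    src = cv2.imread("face_texture_patch.png")      # patch to blend in
    dst = cv2.imread("face_texture_base.png")       # target texture map
    mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)
    center = (dst.shape[1] // 2, dst.shape[0] // 2) # (x, y) paste position

    blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite("face_texture_blended.png", blended)
    ```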

  7. A Heterobimetallic Complex With an Unsupported Uranium(III)-Aluminum(I) Bond: (CpSiMe3)3U-AlCp* (Cp* = C5Me5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minasian, Stefan; Krinsky Ph.D., Jamin; Williams, Valerie

    2008-07-23

    The discovery of molecular metal-metal bonds has been of fundamental importance to the understanding of chemical bonding. For the actinides, examples of unsupported metal-metal bonds are relatively uncommon, consisting of Cp3U-SnPh3 and several actinide-transition metal complexes. Traditionally, bonding in the f-elements has been described as electrostatic; however, elucidating the degree of covalency is a subject of recent research. In carbon monoxide complexes of the trivalent uranium metallocenes, decreased νCO values relative to free CO suggest that the U(III) atom acts as a π-donor. Ephritikhine and coworkers have demonstrated that π-accepting ligands can differentiate trivalent lanthanide and actinide ions, an effect that renders this chemistry of interest in the context of nuclear waste separation technology.

  8. 3-D Ultrasound Vascularity Assessment for Breast Cancer Diagnosis

    DTIC Science & Technology

    2000-09-01

    Final pathologic studies revealed carcinosarcoma (half ductal, half chondrosarcoma) presenting as a circumscribed mass with no microcalcifications. [Fragment of a tumor-type count table: prostate, angiosarcoma, chondrosarcoma, nasopharyngeal tumor, hemangioendothelioma, renal tumor.] Table 2 summarizes the clinical course of a patient with metastatic chondrosarcoma, secondary to radiation treatment for breast cancer, who was rendered copper deficient.

  9. Immune Cells, if Rendered Insensitive to Transforming Growth Factorbeta, Can Cure Prostate Cancer

    DTIC Science & Technology

    2007-02-01

    Author-list fragment: Robert E. Meyer, Shilajit D. Kundu, Michael Pins, Borko Javonovic, Timothy Kuzel, Seong-Jin Kim, Luk Van Parijs, Norm Smith, Larry Wong, Norman M. Greenberg, and Ximing Yang.

  10. New impressive capabilities of SE-workbench for EO/IR real-time rendering of animated scenarios including flares

    NASA Astrophysics Data System (ADS)

    Le Goff, Alain; Cathala, Thierry; Latger, Jean

    2015-10-01

    To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft, as viewed by EO/IR threats. For this purpose, it completed the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature, and is now integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain, taking advantage of recent advances in GPU computing techniques. Recent developments concern mainly the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of image-based rendering for dynamic interpolation of static plume signatures and, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.

  11. Roughness based perceptual analysis towards digital skin imaging system with haptic feedback.

    PubMed

    Kim, K

    2016-08-01

    To examine psoriasis or atopic eczema, analyzing skin roughness by palpation is essential to precisely diagnose skin diseases. However, optical-sensor-based skin imaging systems do not allow dermatologists to touch skin images. To solve this problem, a new haptic rendering technology that can accurately display skin roughness must be developed. In addition, the rendering algorithm must be able to filter spatial noise created during 2D to 3D image conversion without losing the original roughness of the skin image. In this study, a perceptual way to design a noise filter that removes spatial noise while recovering maximal roughness is introduced, based on an understanding of human sensitivity to surface roughness. A visuohaptic rendering system that lets a user see and touch digital skin surface roughness has been developed, including a geometric roughness estimation method for a meshed surface. Following this, a psychophysical experiment was designed and conducted with 12 human subjects to measure human perception with the developed visual and haptic interfaces when examining surface roughness. The experiment found that touch is more sensitive at lower surface roughness, and vice versa. Human perception with both senses, vision and touch, becomes less sensitive to surface distortions as roughness increases. When interacting with both channels, visual and haptic, the ability to detect roughness abnormalities is greatly improved by sensory integration with the developed visuohaptic rendering system. The result can be used as a guideline to design a noise filter that perceptually removes spatial noise while recovering maximal roughness from a digital skin image obtained by optical sensors. In addition, the result confirms that the developed visuohaptic rendering system can help dermatologists or skin care professionals examine skin conditions by using vision and touch at the same time. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
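
    One simple way to estimate the geometric roughness of a meshed surface patch (a standard plane-fit residual; the paper's own estimator is not reproduced) is the RMS deviation of the patch vertices from their least-squares plane:

    ```python
    # Sketch: RMS roughness of a local surface patch via PCA of its vertices.
    import numpy as np

    def patch_roughness(vertices):
        """vertices: (N, 3) array for one local surface patch."""
        centered = vertices - vertices.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[2]                             # direction of least variance
        heights = centered @ normal                # signed distance to the plane
        return np.sqrt(np.mean(heights ** 2))      # RMS roughness
    ```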

  12. Two Eyes, 3D: Stereoscopic Design Principles

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Subbarao, M.; Wyatt, R.

    2013-01-01

    Two Eyes, 3D is an NSF-funded research project about how people perceive highly spatial objects when shown with 2D or stereoscopic ("3D") representations. As part of the project, we produced a short film about SN 2011fe. The high-definition film has been rendered in both 2D and stereoscopic formats. It was developed according to a set of stereoscopic design principles we derived from the literature and past experience producing and studying stereoscopic films. Study participants take a pre- and post-test that involves a spatial cognition assessment and scientific knowledge questions about Type-1a supernovae. For the evaluation, participants use iPads in order to record spatial manipulation of the device and look for elements of embodied cognition. We will present early results and also describe the stereoscopic design principles and the rationale behind them. All of our content and software is available under open source licenses. More information is at www.twoeyes3d.org.

  13. Signals of Personality and Health: The Contributions of Facial Shape, Skin Texture, and Viewing Angle

    ERIC Educational Resources Information Center

    Jones, Alex L.; Kramer, Robin S. S.; Ward, Robert

    2012-01-01

    To what extent does information in a person's face predict their likely behavior? There is increasing evidence for association between relatively neutral, static facial appearance and personality traits. By using composite images rendered from three dimensional (3D) scans of women scoring high and low on health and personality dimensions, we aimed…

  14. 77 FR 59458 - Regulation of Fuels and Fuel Additives: 2013 Biomass-Based Diesel Renewable Fuel Volume

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-27

    ... Consumption A. Demand for Biomass-Based Diesel B. Availability of Feedstocks To Produce 1.28 Billion Gallons of Biodiesel 1. Grease and Rendered Fats 2. Corn Oil 3. Soybean Oil 4. Effects on Food Prices 5. Other Bio-Oils C. Production Capacity D. Consumption Capacity E. Biomass-Based Diesel Distribution...

  15. Validation of the Five-Phase Method for Simulating Complex Fenestration Systems with Radiance against Field Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.

    2016-08-29

    The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.

  16. A new Dirac cone material: a graphene-like Be3C2 monolayer.

    PubMed

    Wang, Bing; Yuan, Shijun; Li, Yunhai; Shi, Li; Wang, Jinlan

    2017-05-04

    Two-dimensional (2D) materials with Dirac cones exhibit rich physics and many intriguing properties, but the search for new 2D Dirac materials remains a current hotspot. Using the global particle-swarm optimization method and density functional theory, we predict a new stable graphene-like 2D Dirac material: a Be3C2 monolayer with a hexagonal honeycomb structure. The Dirac point occurs exactly at the Fermi level and arises from the merging of the hybridized pz bands of Be and C atoms. Most interestingly, this monolayer exhibits a high Fermi velocity of the same order as that of graphene. Moreover, the Dirac cone is very robust and is retained even when spin-orbit coupling or external strain is included. These outstanding properties render the Be3C2 monolayer a promising 2D material for special electronics applications.

  17. Numerical simulations of the NREL S826 airfoil

    NASA Astrophysics Data System (ADS)

    Sagmo, KF; Bartl, J.; Sætran, L.

    2016-09-01

    2D and 3D steady-state simulations were done using the commercial CFD package Star-CCM+ with three different RANS turbulence models. Lift and drag coefficients were simulated at different angles of attack for the NREL S826 airfoil at a Reynolds number of 100,000 and compared to experimental data obtained at NTNU and at DTU. The Spalart-Allmaras and the Realizable k-epsilon turbulence models reproduced the experimental lift results well in the 2D simulations. The 3D simulations with the Realizable two-layer k-epsilon model predicted essentially the same lift coefficients as the 2D Spalart-Allmaras simulations. A comparison between 2D and 3D simulations with the Realizable k-epsilon model showed that the 2D simulations predicted significantly lower drag. The 3D simulations also yielded surface pressure predictions along the wing span, together with volumetric renderings of vorticity. Both showed a high degree of spanwise flow variation when going into the stall region, and predicted a flow field resembling that of stall cells for angles of attack above peak lift.

  18. 3D Microperiodic Hydrogel Scaffolds for Robust Neuronal Cultures

    PubMed Central

    Hanson Shepherd, Jennifer N.; Parker, Sara T.; Shepherd, Robert F.; Gillette, Martha U.; Lewis, Jennifer A.; Nuzzo, Ralph G.

    2011-01-01

    Three-dimensional (3D) microperiodic scaffolds of poly(2-hydroxyethyl methacrylate) (pHEMA) have been fabricated by direct-write assembly of a photopolymerizable hydrogel ink. The ink is initially composed of physically entangled pHEMA chains dissolved in a solution of HEMA monomer, comonomer, photoinitiator and water. Upon printing 3D scaffolds of varying architecture, the ink filaments are exposed to UV light, where they are transformed into an interpenetrating hydrogel network of chemically cross-linked and physically entangled pHEMA chains. These 3D microperiodic scaffolds are rendered growth compliant for primary rat hippocampal neurons by absorption of polylysine. Neuronal cells thrive on these scaffolds, forming differentiated, intricately branched networks. Confocal laser scanning microscopy reveals that both cell distribution and extent of neuronal process alignment depend upon scaffold architecture. This work provides an important step forward in the creation of suitable platforms for in vitro study of sensitive cell types. PMID:21709750

  19. Insights from imaging the implanting embryo and the uterine environment in three dimensions

    PubMed Central

    Arora, Ripla; Fries, Adam; Oelerich, Karina; Marchuk, Kyle; Sabeur, Khalida; Giudice, Linda C.

    2016-01-01

    Although much is known about the embryo during implantation, the architecture of the uterine environment in which the early embryo develops is not well understood. We employed confocal imaging in combination with 3D analysis to identify and quantify dynamic changes to the luminal structure of murine uterus in preparation for implantation. When applied to mouse mutants with known implantation defects, this method detected striking peri-implantation abnormalities in uterine morphology that cannot be visualized by histology. We revealed 3D organization of uterine glands and found that they undergo a stereotypical reorientation concurrent with implantation. Furthermore, we extended this technique to generate a 3D rendering of the cycling human endometrium. Analyzing the uterine and embryo structure in 3D for different genetic mutants and pathological conditions will help uncover novel molecular pathways and global structural changes that contribute to successful implantation of an embryo. PMID:27836961

  20. Development of a system for acquiring, reconstructing, and visualizing three-dimensional ultrasonic angiograms

    NASA Astrophysics Data System (ADS)

    Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.

    1995-04-01

    We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
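
    The reconstruction step can be sketched as follows (a hedged illustration with assumed parameter names, not the authors' software): each tracked 2D frame is placed into the voxel grid using the pose recorded by the magnetic position sensor.

    ```python
    # Sketch of freehand 3-D compounding: map each tracked 2-D Doppler frame
    # into the voxel volume with its recorded pose (R, t), nearest-voxel placement.
    import numpy as np

    def insert_frame(volume, frame, R, t, pix_mm, vox_mm):
        """volume: (Z, Y, X) voxel grid; frame: (h, w) image;
        R (3x3), t (3,): frame pose in mm; pix_mm, vox_mm: spacings in mm."""
        h, w = frame.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # image-plane points in probe coordinates (mm); z = 0 in the scan plane
        pts = np.stack([u.ravel() * pix_mm, v.ravel() * pix_mm, np.zeros(h * w)])
        world = R @ pts + t[:, None]               # mm, scanner/world frame
        idx = np.round(world / vox_mm).astype(int) # nearest voxel indices (x, y, z)
        ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[::-1, None]), axis=0)
        volume[idx[2, ok], idx[1, ok], idx[0, ok]] = frame.ravel()[ok]
    ```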

  1. Real-time catheter localization and visualization using three-dimensional echocardiography

    NASA Astrophysics Data System (ADS)

    Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil

    2017-03-01

    Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method called Delay and Standard Deviation (DASD) beamforming to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue and the DASD beamformed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
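
    The tubular-structure enhancement stage is based on the Frangi vesselness filter, which is available in scikit-image; the DASD beamforming itself is not reproduced here, and the data below is synthetic.

    ```python
    # Enhance tube-like structures (such as a catheter) in a 3-D volume with the
    # Frangi filter from scikit-image. The volume here is a synthetic stand-in.
    import numpy as np
    from skimage.filters import frangi

    vol = np.zeros((64, 64, 64), dtype=float)
    vol[32, 30:34, 8:56] = 1.0                     # a crude synthetic "catheter"

    vesselness = frangi(vol, sigmas=range(1, 4), black_ridges=False)
    print("peak response inside tube:", vesselness[32, 32, 30])
    ```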

  2. Medical 3D Printing for the Radiologist

    PubMed Central

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233

  3. Medical 3D Printing for the Radiologist.

    PubMed

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. (©)RSNA, 2015.
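
    One common DICOM-to-printable-model path (a sketch of a typical pipeline, not a prescribed standard; the input file is a placeholder) thresholds the CT volume, extracts an isosurface with marching cubes, and hands the mesh to an STL writer:

    ```python
    # Sketch: segment bone from CT, extract a surface mesh, report its size.
    # Writing the STL (e.g. with numpy-stl) would follow.
    import numpy as np
    from skimage.measure import marching_cubes

    ct = np.load("ct_volume.npy")                  # placeholder volume (Z, Y, X), in HU
    bone = ct > 300                                # crude bone threshold in HU

    verts, faces, normals, _ = marching_cubes(bone.astype(float), level=0.5,
                                              spacing=(1.0, 0.7, 0.7))  # mm voxels
    print(f"{len(verts)} vertices, {len(faces)} triangles extracted")
    ```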

  4. Operating System Support for Mobile Interactive Applications

    DTIC Science & Technology

    2002-08-01

    Scene polygon counts reported: Taj Mahal 127,406; Café 138,598; Notre Dame 160,206; Buckingham Palace (interior) 235,572. [Figure fragment: rendering demand in millions of cycles versus number of polygons rendered for the four scenes, under (a) a random camera position and (b) a fixed camera position; the x-axis is the number of polygons rendered, expressed relative to the original model size.]

  5. Approximation of a foreign object using x-rays, reference photographs and 3D reconstruction techniques.

    PubMed

    Briggs, Matt; Shanmugam, Mohan

    2013-12-01

    This case study describes how a 3D animation was created to approximate the depth and angle of a foreign object (a metal bar) that had become embedded in a patient's head. A pre-operative CT scan was not available as the patient could not fit through the CT scanner; therefore, a post-surgical CT scan, x-ray and photographic images were used. A surface render of the skull was created and imported into Blender (a 3D animation application). The metal bar itself was not available; however, images of a similar object retrieved from the scene by the ambulance crew were used to recreate a 3D model. The x-ray images were then imported into Blender and used as background images in order to align the skull reconstruction and metal bar at the correct depth/angle. A 3D animation was then created to fully illustrate the angle and depth of the metal bar in the skull.

  6. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    PubMed Central

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renders, and rapid prototyping tools. In this paper, following an overview of the state-of-the-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618

  7. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation.

    PubMed

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a "sensor fusion" approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renders, and rapid prototyping tools. In this paper, following an overview of the state-of-the-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.

  8. Immersive Molecular Visualization with Omnidirectional Stereoscopic Ray Tracing and Remote Rendering

    PubMed Central

    Stone, John E.; Sherman, William R.; Schulten, Klaus

    2016-01-01

    Immersive molecular visualization provides the viewer with intuitive perception of complex structures and spatial relationships that are of critical interest to structural biologists. The recent availability of commodity head mounted displays (HMDs) provides a compelling opportunity for widespread adoption of immersive visualization by molecular scientists, but HMDs pose additional challenges due to the need for low-latency, high-frame-rate rendering. State-of-the-art molecular dynamics simulations produce terabytes of data that can be impractical to transfer from remote supercomputers, necessitating routine use of remote visualization. Hardware-accelerated video encoding has profoundly increased frame rates and image resolution for remote visualization, however round-trip network latencies would cause simulator sickness when using HMDs. We present a novel two-phase rendering approach that overcomes network latencies with the combination of omnidirectional stereoscopic progressive ray tracing and high performance rasterization, and its implementation within VMD, a widely used molecular visualization and analysis tool. The new rendering approach enables immersive molecular visualization with rendering techniques such as shadows, ambient occlusion lighting, depth-of-field, and high quality transparency, that are particularly helpful for the study of large biomolecular complexes. We describe ray tracing algorithms that are used to optimize interactivity and quality, and we report key performance metrics of the system. The new techniques can also benefit many other application domains. PMID:27747138
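
    Omnidirectional stereo ray generation for an equirectangular image can be sketched as follows (the renderer's actual implementation is not reproduced; the camera model below is a standard ODS approximation with assumed names): each column gets its own eye position, offset tangentially by half the interpupillary distance.

    ```python
    # Sketch of omnidirectional stereo (ODS) ray generation for an
    # equirectangular image: per-pixel ray origins and unit directions.
    import numpy as np

    def ods_rays(width, height, ipd, right_eye=True):
        theta = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi    # azimuth
        phi = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi    # elevation
        th, ph = np.meshgrid(theta, phi)
        dirs = np.stack([np.cos(ph) * np.sin(th),
                         np.sin(ph),
                         np.cos(ph) * np.cos(th)], axis=-1)             # unit rays
        sign = 1.0 if right_eye else -1.0
        # horizontal tangent direction, perpendicular to each column's azimuth
        offset = sign * 0.5 * ipd * np.stack([np.cos(th), np.zeros_like(th),
                                              -np.sin(th)], axis=-1)
        return offset, dirs                        # per-pixel origins and directions
    ```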

  9. Bio-inspired color image enhancement

    NASA Astrophysics Data System (ADS)

    Meylan, Laurence; Susstrunk, Sabine

    2004-06-01

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is because the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex, which determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
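
    The center/surround family of Retinex operators the work builds on can be sketched in a few lines (a single-scale version; the paper's adaptive, image-dependent weighting is not reproduced):

    ```python
    # Single-scale center/surround Retinex on the luminance channel: subtract a
    # log-domain estimate of local illumination, then rescale for display.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def retinex_luminance(lum, sigma=80.0, eps=1e-6):
        """lum: 2-D luminance image with values > 0."""
        surround = gaussian_filter(lum, sigma)     # local average illumination
        out = np.log(lum + eps) - np.log(surround + eps)
        out -= out.min()                           # rescale to [0, 1] for display
        return out / max(out.max(), eps)
    ```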

  10. Predictability, Force and (Anti-)Resonance in Complex Object Control.

    PubMed

    Maurice, Pauline; Hogan, Neville; Sternad, Dagmar

    2018-04-18

    Manipulation of complex objects as in tool use is ubiquitous and has given humans an evolutionary advantage. This study examined the strategies humans choose when manipulating an object with underactuated internal dynamics, such as a cup of coffee. The object's dynamics renders the temporal evolution complex, possibly even chaotic, and difficult to predict. A cart-and-pendulum model, loosely mimicking coffee sloshing in a cup, was implemented in a virtual environment with a haptic interface. Participants rhythmically manipulated the virtual cup containing a rolling ball; they could choose the oscillation frequency, while the amplitude was prescribed. Three hypotheses were tested: 1) humans decrease interaction forces between hand and object; 2) humans increase the predictability of the object dynamics; 3) humans exploit the resonances of the coupled object-hand system. Analysis revealed that humans chose either a high-frequency strategy with anti-phase cup-and-ball movements or a low-frequency strategy with in-phase cup-and-ball movements. Counter to Hypothesis 1, they did not decrease interaction force; instead, they increased the predictability of the interaction dynamics, quantified by mutual information, supporting Hypothesis 2. To address Hypothesis 3, frequency analysis of the coupled hand-object system revealed two resonance frequencies separated by an anti-resonance frequency. The low-frequency strategy exploited one resonance, while the high-frequency strategy afforded more choice, consistent with the frequency response of the coupled system; both strategies avoided the anti-resonance. Hence, humans did not prioritize interaction force, but rather strategies that rendered interactions predictable. These findings highlight that physical interactions with complex objects pose control challenges not present in unconstrained movements.
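
    The predictability measure can be illustrated with a histogram-based mutual information estimate between two signals standing in for applied force and object state (toy data; this is not the study's analysis pipeline):

    ```python
    # Histogram estimator of mutual information (in nats) between two signals.
    import numpy as np

    def mutual_information(x, y, bins=32):
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    t = np.linspace(0, 20 * np.pi, 5000)
    force = np.sin(t)                              # toy "applied force"
    state = np.sin(t - 0.4) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
    print("MI(force; state) =", mutual_information(force, state))
    ```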

  11. CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies

    NASA Astrophysics Data System (ADS)

    Moorhead, Ian R.; Gilmore, Marilyn A.; Houlbrook, Alexander W.; Oxford, David E.; Filbee, David R.; Stroud, Colin A.; Hutchings, G.; Kirk, Albert

    2001-09-01

    Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally, the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased such that there are now many research programs using synthetic environments for CCD assessments. Such an approach is attractive; the user has complete control over the environmental parameters and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 micrometers spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a 3D representation of the scenario of interest from a fixed viewpoint. Targets of interest can be placed anywhere within this 3D representation and may be either static or moving. Different illumination conditions and effects of the atmosphere can be modeled, together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of material properties used and the rendering options chosen. A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken, and more complex validation tests using observer trials are planned. The current version of CAMEO-SIM and how its images are used for camouflage assessment is described, and the verification and validation tests undertaken are discussed. In addition, example images are used to demonstrate the significance of different effects, such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined.

  12. Rifaximin diminishes neutropenia following potentially lethal whole-body radiation.

    PubMed

    Jahraus, Christopher D; Schemera, Bettina; Rynders, Patricia; Ramos, Melissa; Powell, Charles; Faircloth, John; Brawner, William R

    2010-07-01

    Terrorist attacks involving radiological or nuclear weapons are a substantial geopolitical concern, given that large populations could be exposed to potentially lethal doses of radiation. Because of this, evaluating potential countermeasures against radiation-induced mortality is critical. Gut microflora are the most common source of systemic infection following exposure to lethal doses of whole-body radiation, suggesting that prophylactic antibiotic therapy may reduce mortality after radiation exposure. The chemical stability, easy administration and favorable tolerability profile of the non-systemic antibiotic, rifaximin, make it an ideal potential candidate for use as a countermeasure. This study evaluated the use of rifaximin as a countermeasure against low-to-intermediate-dose whole-body radiation in rodents. Female Wistar rats (8 weeks old) were irradiated with 550 cGy to the whole body and were evaluated for 30 d. Animals received methylcellulose, neomycin (179 mg/kg/d) or variably dosed rifaximin (150-2000 mg/kg/d) one hour after irradiation and daily throughout the study period. Clinical assessments (e.g. body weight) were made daily. On postirradiation day 30, blood samples were collected and a complete blood cell count was performed. Animals receiving high doses of rifaximin (i.e. 1000 or 2000 mg/kg/d) had a greater increase in weight from the day of irradiation to postirradiation day 30 compared with animals that received placebo or neomycin. For animals with an increase in average body weight from irradiation day within 80-110% of the group average, methylcellulose rendered an absolute neutrophil count (ANC) of 211, neomycin rendered an ANC of 334, rifaximin 300 mg/kg/d rendered an ANC of 582 and rifaximin 1000 mg/kg/d rendered an ANC of 854 (P = 0.05 for group comparison). Exposure to rifaximin after near-lethal whole-body radiation resulted in diminished levels of neutropenia.

  13. Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century

    NASA Astrophysics Data System (ADS)

    Bartsch, H.; Erlebacher, G.

    2003-12-01

    Amira (www.amiravis.com) is a general-purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be assembled visually and interactively to create complex visual displays. These modules and their associated user interfaces are controlled either through the mouse or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how they may be used to enhance the comprehension of datasets in use in the Earth Sciences community. These features are illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators are also illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a Java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529-532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'Amira' for the Geosciences", Visual Geosciences, 2003.

  14. Highly luminescent, biocompatible ytterbium(iii) complexes as near-infrared fluorophores for living cell imaging.

    PubMed

    Ning, Yingying; Tang, Juan; Liu, Yi-Wei; Jing, Jing; Sun, Yuansheng; Zhang, Jun-Long

    2018-04-21

    Herein, we report the design and synthesis of biocompatible Yb³⁺ complexes for near-infrared (NIR) living-cell imaging. Upon excitation in either the visible (Soret band) or red region (Q band), these β-fluorinated Yb³⁺ complexes display high NIR luminescence (quantum yields up to 23% and 13% in dimethyl sulfoxide and water, respectively) and have higher stabilities and prolonged decay lifetimes (up to 249 μs) compared with their β-non-fluorinated counterparts. This renders the β-fluorinated Yb³⁺ complexes a new class of biological optical probes for both steady-state imaging and time-resolved fluorescence lifetime imaging (FLIM). NIR confocal fluorescence images showed strong and specific intracellular Yb³⁺ luminescence signals when the biocompatible Yb³⁺ complexes were taken up into living cells. Importantly, FLIM measurements showed an intracellular lifetime distribution between 100 and 200 μs, allowing effective discrimination from cell autofluorescence and affording high signal-to-noise ratios, demonstrated here for the first time in the NIR region. These results demonstrate the promise of NIR lanthanide complexes as biological probes for NIR steady-state fluorescence and time-resolved fluorescence lifetime imaging.

  15. Recognition Of Complex Three Dimensional Objects Using Three Dimensional Moment Invariants

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1985-01-01

    A technique for the recognition of complex three-dimensional objects is presented. The complex 3-D objects are represented in terms of their 3-D moment invariants: algebraic expressions that remain unchanged regardless of the objects' orientation and location in the field of view. The technique of 3-D moment invariants has been used successfully for simple 3-D object recognition in the past; in this work the method is extended to the representation of more complex objects. Two complex objects are represented digitally, their 3-D moment invariants are calculated, and the invariance of these expressions is then verified by changing the orientation and location of the objects in the field of view. The results of this study have significant impact on 3-D robotic vision, 3-D target recognition, scene analysis and artificial intelligence.
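
    To make the idea concrete, the sketch below computes the three classical rotation-and-translation invariants that can be formed from second-order central moments (the trace, the sum of principal 2x2 minors, and the determinant of the central moment matrix) for a unit-weight point cloud, and checks numerically that they survive a rigid motion. This illustrates the general principle only; it is not Sadjadi's full invariant set for complex objects.

        import numpy as np

        def second_order_invariants(points):
            """Rotation/translation invariants from second-order central moments.

            points: (N, 3) array of 3-D coordinates (a unit-weight point cloud).
            Returns (J1, J2, J3): trace, sum of principal 2x2 minors, and
            determinant of the 3x3 central moment matrix -- all unchanged by
            rigid rotation and translation of the object.
            """
            c = points - points.mean(axis=0)          # translate to centroid
            M = (c.T @ c) / len(points)               # 3x3 central moment matrix
            J1 = np.trace(M)
            J2 = 0.5 * (np.trace(M) ** 2 - np.trace(M @ M))
            J3 = np.linalg.det(M)
            return J1, J2, J3

        # Check invariance: rotate and translate an asymmetric point set, compare
        rng = np.random.default_rng(0)
        pts = rng.normal(size=(500, 3)) * [3.0, 1.5, 0.5]
        theta = 0.7
        R = np.array([[np.cos(theta), -np.sin(theta), 0],
                      [np.sin(theta),  np.cos(theta), 0],
                      [0, 0, 1]])
        print(second_order_invariants(pts))
        print(second_order_invariants(pts @ R.T + [10, -4, 2]))  # same values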

  16. Magnetic resonance imaging of focal cortical dysplasia: Comparison of 3D and 2D fluid attenuated inversion recovery sequences at 3T.

    PubMed

    Tschampa, Henriette J; Urbach, Horst; Malter, Michael; Surges, Rainer; Greschus, Susanne; Gieseke, Jürgen

    2015-10-01

    Focal cortical dysplasia (FCD) is a frequent finding in drug-resistant epilepsy. The aim of our study was to evaluate an isotropic high-resolution 3-dimensional fluid-attenuated inversion recovery sequence (3D FLAIR) at 3T against standard 2D FLAIR in the diagnosis of FCD. In a prospective study, 19 epilepsy patients with an MR diagnosis of FCD were examined with a sagittal 3D FLAIR sequence with modulated refocusing flip angle (slice thickness 1.10 mm) and with 2D FLAIR in the coronal (thickness 3 mm) and axial (thickness 2 mm) planes. Manually placed regions of interest were used for quantitative analysis. Qualitative image analysis was performed by two neuroradiologists in consensus. Contrast between gray and white matter (p ≤ 0.02), between the lesion and white matter (p ≤ 0.031), and between hyperintense extension to the ventricle and white matter (p ≤ 0.021) was significantly higher in 2D than in 3D FLAIR sequences. In the visual analysis there was no difference between the 2D and 3D sequences. Conventional 2D FLAIR sequences thus yield higher image contrast than the employed 3D FLAIR sequence in patients with FCD. Potential advantages of 3D imaging, such as surface rendering or automated techniques for lesion detection, remain to be elucidated.
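
    The abstract does not state which contrast formula was used for the ROI analysis; one common choice is a Michelson-style contrast between the mean signal intensities of two regions of interest, sketched below with invented numbers.

        def roi_contrast(mean_a, mean_b):
            """Michelson-style contrast between two ROI mean signal
            intensities. One common definition; the study's exact
            formula is not stated in the abstract."""
            return abs(mean_a - mean_b) / (mean_a + mean_b)

        gm, wm = 412.0, 265.0   # illustrative ROI means, not study data
        print(round(roi_contrast(gm, wm), 3))   # 0.217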

  17. A Spatially-Registered, Massively Parallelised Data Structure for Interacting with Large, Integrated Geodatasets

    NASA Astrophysics Data System (ADS)

    Irving, D. H.; Rasheed, M.; O'Doherty, N.

    2010-12-01

    The efficient storage, retrieval and interactive use of subsurface data present great challenges in geodata management. Data volumes are typically massive, complex and poorly indexed, with inadequate metadata. Derived geomodels and interpretations are often tightly bound in application-centric and proprietary formats; open standards for long-term stewardship are poorly developed. Consequently, current data storage is a combination of: complex Logical Data Models (LDMs) based on file storage formats; 2D GIS tree-based indexing of spatial data; and translations of serialised memory-based storage techniques into disk-based storage. Whilst adequate for working at the mesoscale over short timeframes, these approaches all possess technical and operational shortcomings: data model complexity; anisotropy of access; poor scalability to large and complex datasets; and weak implementation and integration of metadata. High-performance hardware such as parallelised storage and Relational Database Management Systems (RDBMS) has long been exploited in many solutions, but the underlying data structure must provide commensurate efficiencies to allow multi-user, multi-application and near-realtime data interaction. We present an open Spatially-Registered Data Structure (SRDS) built on a Massively Parallel Processing (MPP) database architecture implemented by an ANSI SQL 2008-compliant RDBMS. We propose an LDM comprising a 3D Earth model decomposed such that each increasing Level of Detail (LoD) is achieved by recursively halving the bin size until it is less than the error in each spatial dimension for that data point. The value of an attribute at that point is stored as a property of that point at that LoD. It is key to the numerical efficiency of the SRDS that it is underpinned by a power-of-two relationship, thus precluding the need for computationally intensive floating-point arithmetic. Our approach employed a tightly clustered MPP array with small clusters of storage, processors and memory communicating over a high-speed network interconnect. This is a shared-nothing architecture in which resources are managed within each cluster, unlike most other RDBMSs. Data are accessed on this architecture by their primary index values via a hashing algorithm for point-to-point access; the hashing algorithm's main role is the efficient distribution of data across the clusters based on the primary index. In this study we used 3D seismic volumes, 2D seismic profiles and borehole logs to demonstrate application in both (x,y,TWT)- and (x,y,z)-space. In the SRDS the primary index is a composite column index on (x,y), avoiding the time-consuming full table scans incurred by tree-based systems; data access is therefore isotropic. A query for data in a specified spatial range permits retrieval recursively by point-to-point queries within each nested LoD, yielding true linear performance up to the Petabyte scale, with hardware scaling the primary limiting factor. Our architecture and LDM promote: realtime interaction with massive data volumes; streaming of result sets and server-rendered 2D/3D imagery; rigorous workflow control and auditing; and in-database algorithms run directly against data as an HPC cloud service.
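
    Two of the abstract's ideas lend themselves to a short sketch: the LoD scheme (recursively halve the bin size until it falls below the positional error of a data point) and hash distribution of the composite (x,y) primary index across shared-nothing clusters. The Python below is a toy illustration under those descriptions only; the function names, the choice of hash, and all values are invented, and the real system performs this inside an MPP RDBMS, not in application code.

        import hashlib

        def lod_for_error(extent, error):
            """Halve the bin size from the full model extent until it drops
            below the positional error; the number of halvings is the level
            of detail. Powers of two keep the subdivision arithmetic exact."""
            lod, bin_size = 0, float(extent)
            while bin_size >= error:
                bin_size /= 2.0
                lod += 1
            return lod, bin_size

        def cluster_for_point(x, y, n_clusters):
            """Hash the composite (x, y) primary index to a storage cluster,
            mimicking hash-distributed placement on a shared-nothing MPP array."""
            key = f"{x}:{y}".encode()
            return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % n_clusters

        lod, size = lod_for_error(extent=1024.0, error=0.3)   # metres, illustrative
        print(lod, size)                                       # 12 halvings -> 0.25 m bins
        print(cluster_for_point(431250, 622075, n_clusters=16))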

  18. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
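
    A minimal program in the style the abstract describes is shown below, written against the present-day vpython package (the 2001-era module was imported as visual, but the idiom is unchanged): the loop is purely computational, and the rendering happens automatically.

        # Projectile under gravity: no graphics code beyond creating the object.
        from vpython import sphere, vector, color, rate

        ball = sphere(pos=vector(-5, 4, 0), radius=0.4,
                      color=color.red, make_trail=True)
        velocity = vector(6, 0, 0)
        g = vector(0, -9.8, 0)
        dt = 0.005

        while ball.pos.y > 0:          # stop when the ball reaches the ground
            rate(200)                  # cap at 200 updates/s; each step is rendered
            velocity = velocity + g * dt
            ball.pos = ball.pos + velocity * dt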

  19. State of the "art": a taxonomy of artistic stylization techniques for images and video.

    PubMed

    Kyprianidis, Jan Eric; Collomosse, John; Wang, Tinghuai; Isenberg, Tobias

    2013-05-01

    This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.
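
    As a concrete instance of the edge-aware-filtering strand of the taxonomy, the sketch below uses OpenCV to produce a simple cartoon stylization: iterated bilateral filtering flattens colour regions while preserving edges, and a thresholded edge map is composited back in. The file names and parameter values are placeholders, and this is one common recipe rather than any specific technique from the survey.

        import cv2

        img = cv2.imread("input.jpg")
        smooth = img
        for _ in range(4):                       # iterate for stronger abstraction
            smooth = cv2.bilateralFilter(smooth, d=9, sigmaColor=40, sigmaSpace=7)

        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)           # suppress noise before edge detection
        edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                      cv2.THRESH_BINARY, blockSize=9, C=2)

        cartoon = cv2.bitwise_and(smooth, smooth, mask=edges)  # overlay dark edges
        cv2.imwrite("stylized.jpg", cartoon)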

  20. Alpha-tocopheryl succinate induces apoptosis by targeting ubiquinone-binding sites in mitochondrial respiratory complex II.

    PubMed

    Dong, L-F; Low, P; Dyason, J C; Wang, X-F; Prochazka, L; Witting, P K; Freeman, R; Swettenham, E; Valis, K; Liu, J; Zobalova, R; Turanek, J; Spitz, D R; Domann, F E; Scheffler, I E; Ralph, S J; Neuzil, J

    2008-07-17

    Alpha-tocopheryl succinate (alpha-TOS) is a selective inducer of apoptosis in cancer cells, a process which involves the accumulation of reactive oxygen species (ROS). The molecular target of alpha-TOS has not been identified. Here, we show that alpha-TOS inhibits the succinate dehydrogenase (SDH) activity of complex II (CII) by interacting with the proximal and distal ubiquinone (UbQ)-binding sites (Q(P) and Q(D), respectively). This is based on biochemical analyses and molecular modelling, revealing similar or stronger interaction energies of alpha-TOS compared with those of UbQ for the Q(P) and Q(D) sites, respectively. CybL-mutant cells with dysfunctional CII failed to accumulate ROS and to undergo apoptosis in the presence of alpha-TOS. Similar resistance was observed when CybL was knocked down with siRNA. Reconstitution of functional CII rendered CybL-mutant cells susceptible to alpha-TOS. We propose that alpha-TOS displaces UbQ in CII, causing electrons generated by SDH to recombine with molecular oxygen to yield ROS. Our data highlight CII, a known tumour suppressor, as a novel target for cancer therapy.
